
Sunday, September 29, 2019

MacArthur and Pianka 1966 & Rodriguez et al. 2019

Blog Author: Crystal Uminski

Citations: MacArthur, R.H., and Pianka, E.R., “On optimal use of a patchy environment.” The American Naturalist, Vol. 100 No. 916 (Nov-Dec 1966), p. 603-609. 
Rodriguez, J. et al., “Does optimal foraging theory explain the behavior of the oldest human cannibals?” Journal of Human Evolution, Vol. 131 (2019), p. 228-239.
Authors: 
Robert MacArthur (1930–1972) was an American ecologist noted for his work on niche partitioning. (We’ll be reading his paper on niche partitioning in warblers later in the semester.) MacArthur is credited with establishing the use of the hypothetico-deductive (H-D) method (a process of testing predictions based on existing theory) in the field of ecology. He was considered a pioneering leader in establishing hypothesis-driven science and worked to provide other ecologists with examples of how H-D science should be conducted. MacArthur first used the H-D method in his 1957 paper, “On the relative abundance of bird species.” Fretwell (1975) estimated that in the six years prior to MacArthur’s paper, only 5% of papers published in Ecology involved testing a hypothesis. MacArthur helped to steer ecology away from observation-based work (such as the style used by Clements) and towards experimentation. Eric Pianka (b. 1939) is an American evolutionary ecologist who completed post-doctoral work with Robert MacArthur at Princeton University. Pianka has published over 200 scientific papers and is noted for his work on desert lizards. He has been a faculty member at the University of Texas at Austin since 1968.
Jesus Rodriguez is a Spanish biologist who specializes in the paleoecology of European Pleistocene mammals. He has researched the evolution of mammalian community structure and the influence of climate change on the extinction of megafauna, but now studies human-carnivore interactions using mathematical models. Rodriguez is currently a curator at the National Research Center on Human Evolution in Spain.
MacArthur and Pianka (1966): 
MacArthur and Pianka created mathematical models to determine the optimal utilization of time and energy budgets for a species. Their work is the foundation for optimal foraging theory. In “fine-grained” environments (where prey species are encountered in the proportion in which they occur), the time predators spend per item eaten is divided into search time and pursuit time. When an additional prey item is added to the diet (N+1), the change in search and pursuit time can be calculated. When predators expand their diet to include a larger number of prey options, search time per item falls, but pursuit time per item can rise (some of the additional prey items may be more difficult to find or catch). Figure 1 shows that for a hypothetical predator in a fine-grained environment, the optimal diet contains 4 prey species: adding more species would increase pursuit time more than it would decrease search time, which is not beneficial to the predator. The S’ curve in Figure 1 represents a scenario where the density of the prey species is reduced by half. The inclusion of S’ shows that the number of prey species in the optimal diet is not static – in the S’ scenario, the optimal diet includes 5 prey species.
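The diet-breadth rule described above can be sketched in code. This is a minimal illustration of the marginal comparison, not the paper’s actual equations; all time values below are invented.

```python
# A minimal sketch of the diet-breadth rule (all time values invented,
# not from the paper): prey types are ranked, and a new type joins the
# diet only while the search time saved per item exceeds the extra
# pursuit time per item.

def optimal_diet(delta_search, delta_pursuit):
    """Number of prey types in the optimal diet.

    delta_search[i]  -- drop in mean search time per item when prey
                        type i+2 is added to a diet of i+1 types
    delta_pursuit[i] -- rise in mean pursuit time per item at that step
    """
    n = 1  # the top-ranked prey type is always eaten
    for ds, dp in zip(delta_search, delta_pursuit):
        if ds > dp:
            n += 1
        else:
            break
    return n

d_search = [5.0, 3.0, 2.0, 0.6, 0.2]
d_pursuit = [1.0, 1.0, 1.0, 1.0, 1.0]
halved_density = [2 * ds for ds in d_search]  # the S' scenario: scarcer prey
print(optimal_diet(d_search, d_pursuit))        # -> 4
print(optimal_diet(halved_density, d_pursuit))  # -> 5
```

Halving prey density roughly doubles the search-time savings at each step, so the optimal diet widens from 4 to 5 prey types, mirroring the shift from the S curve to the S’ curve in Figure 1.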
In patchy environments, the hunting time (H) increases when predators are in less-suitable patches. ΔH represents the increase in hunting time per item when an additional patch type is added to the predator’s itinerary. The optimal itinerary in patchy environments is determined by comparing the change in hunting time to the change in travelling time (ΔT) per prey item caught. Figure 2 shows that for a hypothetical predator in a patchy environment, the optimal number of patch types in the itinerary is 3: adding more patches would decrease travelling time by less than it would increase hunting time.
MacArthur and Pianka concluded that in productive, fine-grained environments, predators with low search/pursuit ratios should be more restricted in their diet, but this conclusion does not hold in patchy environments where predators spend most of their time searching. Predators with large pursuit/search ratios should show restricted patch utilization in productive environments where food is dense. Optimal predators that have competition for food sources will shrink patch utilization but not the number of prey items in the diet.
Rodriguez (2019):
Rodriguez uses the framework of optimal foraging theory, as well as the lens of human behavioral ecology, to explain cannibalism in an ancestral human species. The paper relies on MacArthur and Pianka’s assumptions about prey choice and prey availability and aims to answer whether humans were a high- or low-ranked prey type in the diet of ancient hominins. If humans were a high-ranked prey type, this would support the theory of nutritional cannibalism; if humans were a low-ranked prey type, it would indicate a scarcity of higher-ranked prey types and suggest that cannibalism was a last-resort mechanism for survival.
The different prey species in the ancestral human diet were inferred from the remains identified at an archaeological site. The seven sets of human bones at the site with “anthropogenic modification” were considered the cannibalized humans. The remains of nine other mammal species that were larger than 5 kilograms and had evidence of anthropogenic modification were classified as the other prey types. The caloric content of human and non-human prey types was calculated. MacArthur and Pianka’s model assumes that a predator encounters resources at random in proportion to their abundance in the environment, so the amount of each prey type at the archaeological site is assumed to be in proportion to the amount of that prey type in the environment. Rodriguez used a random encounter model to predict theoretical prey densities.
Rodriguez found that rhinos and bison were high-energy prey and that human individuals provided a “moderate amount of energy” (about 13% of the accounted calories). Compared to other prey species, humans represented a high-ranked resource. Following the Prey Choice Model, high-ranked prey are harvested on encounter. The lower-ranked non-human prey were consumed in proportion to their abundance in the environment, but the high-ranked human prey were harvested in a proportion higher than expected based on estimated abundance: humans were positively selected as a prey option. Rodriguez proposes that even though humans were a lower-calorie food source, their low cost of acquisition made them a higher-ranked prey option.
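The observed-versus-expected logic behind this inference can be illustrated with a toy calculation. This is only a hedged sketch of the comparison, not Rodriguez’s actual method, and all counts below are invented.

```python
# A toy observed-vs-expected comparison (all counts invented, not
# Rodriguez's data): a prey type is "positively selected" when its
# share of the consumed assemblage exceeds its estimated share of the
# environment.

def selection_ratio(consumed, available):
    """Ratio of consumed proportion to available proportion per prey.

    > 1 suggests positive selection; ~1 suggests harvest in proportion
    to abundance, as the Prey Choice Model predicts for low-ranked prey.
    """
    total_c = sum(consumed.values())
    total_a = sum(available.values())
    return {prey: (consumed[prey] / total_c) / (available[prey] / total_a)
            for prey in consumed}

consumed = {"human": 7, "deer": 40, "bison": 13}    # remains at the site
available = {"human": 2, "deer": 70, "bison": 28}   # estimated abundance
ratios = selection_ratio(consumed, available)
print(ratios["human"] > 1 > ratios["deer"])  # humans over-represented
```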
The nature of human behavior may have violated the assumption of random encounter, which could have contributed to the higher-than-expected number of human prey at the archaeological site. The human victims may have been individuals from the same group as the cannibals who died of natural causes (in which case the energetic cost of “hunting” this prey was effectively zero). Rodriguez concluded that ancestral humans followed the principles of optimal foraging theory and that cannibalism can be thought of as an adaptive strategy.

My thoughts:
After reading a biography of MacArthur which emphasized his importance in the movement toward hypothesis-driven science, I was surprised to see that this paper was so heavily grounded in theory. MacArthur did hint at a need for experimentation, indicating that the delta P value “must be empirically determined” for each species, but other than a brief mention that there is “evidence for this from herons,” MacArthur uses mathematical models to explain optimal diets of predators rather than empirical data collected from experimentation.

I am not very familiar with the field of archaeology, but I did wonder whether Rodriguez’s results could have been affected by a preservation bias in the fossil record. Were human fossils more prevalent because they were more likely to be preserved than those of other species?
In most of the papers that we have looked at in class so far, humans are considered exceptions to the ecological frameworks. It is interesting to see how human behavior (even if it is an ancestral species of human) can fit into the principles of optimal foraging theory. Although modern humans don’t necessarily “hunt” for food, can optimal foraging theory still explain some of our food-seeking behaviors today? There’s a whole host of socioeconomic factors that contribute to the “food desert” effect in urban areas, but can some of the concepts of optimal foraging theory be applied to explain why low-cost, high-calorie, processed foods are so prevalent in certain areas? 




Friday, September 27, 2019

Cole 1954 & Ng'habi et al. 2018

Blog author: Miranda Salsbery

Citations:
Cole, L. C. (1954). The population consequences of life history phenomena. The Quarterly Review of Biology, 29(2), 103-137.
Ng’habi, K., Viana, M., Matthiopoulos, J., Lyimo, I., Killeen, G., & Ferguson, H. M. (2018). Mesocosm experiments reveal the impact of mosquito control measures on malaria vector life history and population dynamics. Scientific Reports, 8(1), 13949.
Background on authors: 
LaMont Cole was a professor of zoology at Cornell University. His research ranged broadly, starting with herpetology, the subject of his first paper, which he published at the age of 19. Most of his career was spent blending mathematical methods and models into the field of ecology. In his later work, he focused more on social aspects of ecology, touching on topics such as thermal pollution and pesticides.
Kija Ng’habi works at the Ifakara Health Institute (IHI) in Tanzania. The IHI is a leading research organization in Africa, specializing in developing, testing and validating innovations for health for over 50 years. Ng’habi has published several other papers on mosquito biology and ecology. 

The population consequences of life history phenomena
Here are some of the main points I got out of it:
The goal of this paper was to introduce population modeling to ecology and to incorporate life history traits into such models. Cole lays out several mathematical equations to do this and uses the human population as his primary example. He mainly focuses on understanding and using r, which he terms “the intrinsic rate of natural increase.” Throughout the paper he explores what happens to population growth when various life history factors change.
Nonoverlapping generations, such as those of annual plants, are relatively easy to represent mathematically. Organisms with overlapping generations introduce complexity. Cole’s models for these are built on a “generation law.” He includes life history traits such as age at first and last reproduction, age-specific fecundity, and survivorship.
One of the major findings of this paper is that the most significant life history trait is not the number of offspring produced in a female’s lifetime, as previously thought, but the age at which the offspring are produced. The most rapid population growth is attributed to more reproduction early in life.
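This finding can be illustrated by solving the Euler-Lotka equation, the standard modern form of Cole’s characteristic equation for r. A minimal sketch follows; the survivorship and fecundity schedules are invented, and both are built to give the same lifetime output of two offspring per female.

```python
import math

# A sketch of solving for Cole's "intrinsic rate of natural increase" r
# via the Euler-Lotka equation: sum over ages x of exp(-r*x)*l(x)*m(x) = 1,
# where l(x) is survivorship to age x and m(x) is fecundity at age x.
# The schedules below are invented for illustration.

def intrinsic_rate(lx, mx, lo=-5.0, hi=5.0, tol=1e-10):
    """Solve the Euler-Lotka equation for r by bisection.

    lx, mx -- survivorship and fecundity for age classes x = 1, 2, ...
    The left-hand side is strictly decreasing in r, so bisection works.
    """
    def f(r):
        return sum(math.exp(-r * x) * l * m
                   for x, (l, m) in enumerate(zip(lx, mx), start=1)) - 1.0

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two schedules with identical lifetime output (R0 = 2 offspring per female):
early = intrinsic_rate([1.0, 0.5], [2.0, 0.0])  # all breeding at age 1
late = intrinsic_rate([1.0, 0.5], [0.0, 4.0])   # all breeding at age 2
print(early > late)  # earlier reproduction gives faster growth
```

Although both schedules produce two offspring per lifetime, the early breeder grows at r = ln 2 per time step while the late breeder grows at only half that rate.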
Cole also investigated the tradeoffs between reproducing only once (semelparity) and multiple times (iteroparity). He found that for a semelparous species, the gain in intrinsic rate of increase from becoming iteroparous is the same as the gain from simply increasing its litter size by one. He also found that species with long pre-reproductive periods gain more from iteroparity. Additionally, there are diminishing returns on increasing litter size and the number of litters produced.
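This “one extra offspring” result (often called Cole’s paradox) can be sketched numerically. The idealization assumed here is an annual semelparous species whose entire litter survives to breed, versus an iteroparous perennial with the same litter size and perfect adult survival; the numbers are invented.

```python
# Cole's paradox, sketched with invented numbers. Idealization: an
# annual semelparous species whose whole litter survives to breed, and
# an iteroparous perennial with the same litter size plus perfect
# adult survival.

def annual_lambda(litter):
    """Yearly multiplication rate of the semelparous annual."""
    return litter

def perennial_lambda(litter, adult_survival=1.0):
    """Yearly multiplication rate of the iteroparous perennial:
    this year's litter plus the surviving parent."""
    return litter + adult_survival

b = 10
# Adding one offspring to the annual's litter matches iteroparity exactly:
print(annual_lambda(b + 1) == perennial_lambda(b))  # -> True
```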
Age structure also affects population growth. Any increase in longevity not accompanied by an increase in the age of last reproduction will lead to a lower birth rate. The proportions of the population that are pre-reproductive, reproductive, and post-reproductive directly affect the intrinsic rate of growth.

Mesocosm experiments reveal the impact of mosquito control measures on malaria vector life history and population dynamics 
Main goal
The main goal of this paper was to use long-term mesocosm experiments to evaluate the effect of combining several anti-mosquito methods on mosquito population dynamics and life history traits.
Introduction
Vector control is a widespread method for controlling vector-borne diseases such as malaria. Long-lasting insecticidal nets (LLINs) are a common method to reduce the spread of malaria and limit the vector population (the mosquito Anopheles gambiae). LLINs are limited in their usefulness: they rely on the behavioral predisposition of the mosquitos to feed predominantly on humans indoors at night, and thus do not prevent them from feeding on other mammals outdoors. A combination of control measures, such as LLINs in concert with insecticidal eave louvers (EL) or treatment of cattle with the endectocide ivermectin (IM), may therefore prove most effective in lowering vector population size and fecundity.

Methods:
The authors established nine replicate mosquito populations in large mesocosm chambers and allowed them to stabilize. One human and one cow were provided as hosts. After establishment, LLINs were introduced into 6 of the 9 populations for 8 weeks, with 3 remaining intervention-free to act as controls. After this, IM was administered to cattle within 2 of the LLIN chambers, and insecticide-treated ELs were installed in the houses of 2 other LLIN chambers. Then, IM and EL treatments were swapped between chambers for a final 8 weeks. Mosquito surveillance was done by sampling larvae and adult females every 2–4 weeks. The authors analyzed the data using Bayesian state space models (SSMs).

Results: 
All three intervention treatments (LLINs, LLINs + EL, LLINs + IM) led to reductions in larval and adult mosquito densities compared to the controls. The introduction of LLINs at the beginning altered the population growth trajectory in all chambers where they were introduced. LLINs were estimated to reduce the weekly adult female survival rate by ~91%. Overall, the combination of LLINs followed by IM had the most disruptive effect on vector populations.
Discussion: 
LLINs led to a reduction in adult mosquito survival, with little subsequent impact on the fecundity of survivors. Extending intervention coverage to an alternative host (e.g., IM on cattle) rather than applying additional protective methods to housing (EL) appeared most effective. Despite its limitations, this study provides insights into the fundamental basis of malaria vector population dynamics and emphasizes the importance of understanding them.

My thoughts:
Both papers stress the importance of including life histories in understanding populations. I find it interesting that an idea Cole proposed 65 years ago is helping us understand and combat a deadly disease today. While Cole focused on human population dynamics, despite his call for more ecological use, I think he would be impressed today to see just how widespread his concept has become. This Cole paper has over 2,000 citations!
Do you think most people include life history traits in their ecological research? Do you? 





Sunday, September 15, 2019

Hutchinson 1957 & Holt 2009

Blog Author: David Nguyen
Citations:
Hutchinson GE (1957) Concluding remarks. Cold Spring Harbor Symp 22:415–427.
Holt RD (2009) Bringing the Hutchinsonian niche into the 21st century: ecological and evolutionary perspectives. PNAS 106: 19659–19665.
Author background:
Hutchinson made important contributions to ecology and evolution during his career, the most famous of which is his definition of the niche in “Concluding remarks.” 
Holt is a theoretical ecologist who is best known for his theoretical work in community ecology, such as apparent competition and intraguild predation.
Hutchinson’s paper
The two main niche concepts prior to Hutchinson’s work were the Grinnellian niche (the habitat of a species) and the Eltonian niche (a species’ biotic interactions). To clarify issues surrounding the definition of the ecological niche, Hutchinson took a set-theoretic approach to define the niche concept and analyze the uses and implications of his definition.
Hutchinson defined the fundamental niche (N) by considering an n-dimensional space of environmental variables. If the species can survive and reproduce under the conditions defined by some point in the n-dimensional space, then that point is an element of the species’ fundamental niche. The fundamental niche is thus the subset (N) of the n-dimensional environmental space where a species can “exist indefinitely.”
He listed the following limitations of his definition: 1. All points in the fundamental niche correspond to some equal probability of population persistence while all points outside the fundamental niche correspond to zero probability of persistence; 2. Each of the n environmental variables can be linearly ordered; 3. The model captures only a single point in time; 4. Only a few species can be considered at a time. 
Hutchinson then uses his niche definition to describe the possible outcomes of interspecific competition and relates his findings to Gause’s work on competitive exclusion. He considers two species, S1 and S2, with fundamental niches N1 and N2. If N2 is a proper subset of (lies inside) N1 and S1 is competitively superior, then S1 will exclude S2; however, if S2 is the superior competitor, then S1 will be excluded from the intersection of N1 and N2, with the overall outcome of coexistence. If N1 and N2 partially overlap, then competition in the intersection of N1 and N2 allows only one species to persist there, while the non-intersecting parts of the species’ fundamental niches serve as refuges from competition. These outcomes define the realized niche of each species.
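Hutchinson’s set-theoretic argument can be rendered as a toy example, with Python’s set operations standing in for his set notation. The grid of (temperature, moisture) cells and all values are invented for illustration.

```python
# A toy rendering (all values invented) of Hutchinson's set-theoretic
# niches: points are (temperature, moisture) cells on a coarse grid.

N1 = {(t, m) for t in range(0, 10) for m in range(0, 10)}  # fundamental niche of S1
N2 = {(t, m) for t in range(4, 8) for m in range(4, 8)}    # N2 lies inside N1

assert N2 < N1  # N2 is a proper subset of N1

# If S1 is the superior competitor, S2 is excluded everywhere:
realized_N2_when_S1_wins = N2 - (N1 & N2)   # empty set: S2 excluded

# If S2 is superior, S1 loses only the intersection and keeps a refuge,
# so the species coexist:
realized_N1_when_S2_wins = N1 - (N1 & N2)

print(len(realized_N2_when_S1_wins), len(realized_N1_when_S2_wins))
```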
Following a lengthy discussion of empirical evidence for and against the competitive exclusion principle, Hutchinson directs the remainder of his analysis to problems in characterizing and explaining patterns of community abundance. Since his discussion relies on MacArthur’s broken stick model, which MacArthur later rejected (https://www.jstor.org/stable/1935661) because it lacked empirical support, I won’t go into further details.
Holt’s paper
Holt aimed to expand upon Hutchinson’s niche definition by incorporating the effects of feedbacks, space, and evolution.
As Hutchinson pointed out in his list of limitations, his definition of the niche only allows for a population to persist or not depending on whether it is in its niche. Holt addresses this limitation by introducing the establishment niche, the region of niche space where a species can increase in population when rare, and the persistence niche, the region where an established population can persist. The establishment niche may be smaller than the persistence niche if the species exhibits positive density dependence. The persistence niche may be smaller than the establishment niche if the species can destroy its environment. Holt points out that these feedbacks are rarely accounted for in current approaches to quantifying the niche.
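The establishment/persistence distinction can be illustrated with a toy Allee-effect model. The equation and every parameter below are invented for illustration and are not from Holt’s paper; the point is only that under positive density dependence a population can persist at conditions where it could never establish from rarity.

```python
# A toy continuous-time model (invented, not from Holt's paper) with a
# strong Allee effect:
#     dN/dt = r * N * (N/A - 1) * (1 - N/K)
# Growth is negative below the Allee threshold A, so a rare population
# fails to establish even though a larger one persists near K.

def growth(n, r=0.5, allee=10.0, k=100.0):
    """Per-unit-time change in population size N."""
    return r * n * (n / allee - 1.0) * (1.0 - n / k)

def simulate(n0, steps=2000, dt=0.01):
    """Forward-Euler integration from initial size n0."""
    n = n0
    for _ in range(steps):
        n = max(0.0, n + dt * growth(n))
    return n

rare = simulate(2.0)          # starts below A: declines toward 0
established = simulate(20.0)  # starts above A: climbs toward K
print(rare < 1.0 < established)
```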
Holt then explained why spatial considerations must be accounted for. For some species, like weedy plants or diseases, population persistence depends heavily on dispersal rates and metapopulation connectedness. Since persistence is the main criterion for Hutchinson’s niche concept, Holt argues that spatial processes must be considered when quantifying a species’ niche.
Holt then explained the importance of evolutionary considerations. The niche of species may be relatively unchanging over time or can change rapidly. Holt argues that a conceptual framework is needed to understand why species can vary so much in rates of niche evolution.
My thoughts
I liked the way Hutchinson defined the niche, and I think it is impressive that his definition has had such an impact on ecology. I chose the Holt paper because I thought it would be interesting to see how theoretical perspectives on the niche have changed over time. As expected, much of Holt’s work focused on bringing the biological complexities we know about today into Hutchinson’s niche definition. However, I wonder how ecologists might take the nuances and expansions that Holt raised into account. How can the current approaches ecologists use to quantify niches, like species distribution models (mechanistic or statistical), take these complexities into account?


Friday, September 13, 2019

Preston 1962 and Proosdij et al. 2016

Blog Author: Bailey McNichol

Citations:

Preston, F.W. 1962. The canonical distribution of commonness and rarity: part 1. Ecology 43(2): 185-215.

Van Proosdij, A.S.J., Sosef, M.S.M., Wieringa, J.J., and Raes, N. 2016. Minimum required number of specimen records to develop accurate species distribution models.

Author Background:

Frank Preston is an English-American scholar with expertise in engineering, ecology, and conservation. He studied in London, worked briefly in the U.S. on behalf of his employer in lens/camera manufacturing, returned to London for his Ph.D., and later returned and established a glass research laboratory in Butler, Pennsylvania. It was in the valleys of western Pennsylvania that he developed an interest in conservation, mapping geologic features, studying migration patterns of extinct mammals, and eventually beginning his long endeavor of studying birds. His experience both in ecology and engineering allowed him to examine the theory and mathematical relationships behind ecological rarity and commonness.

The lead author on the companion paper, André S.J. van Proosdij, is currently the Project Manager in Ecology at DAG Netherlands, a consulting group that works in projects related to sustainability, environmental monitoring, and product design. He received his Ph.D. from Wageningen University (Wageningen, Netherlands) and his dissertation focused on determining factors of plant species diversity in Central Africa. He has expertise in botany, modeling (specifically with species distribution models), and biodiversity research.

Preston paper:

Summary of paper/main questions:
Building upon his 1948 paper, Preston introduced the theory behind a new model to conceptualize distributions and abundances of species. He observed that we tend to describe abundances in relative rather than absolute terms, and demonstrated a method of grouping species into geometric classes, or “octaves,” based on the binary logarithm (log base 2), so that each octave represents species whose individuals are twice as abundant as those in the previous octave. He showed that, through log-transformation, species abundances tend to follow a Gaussian (normal) distribution. In this paper, he expanded upon his previous findings and aimed to demonstrate that, in addition to the species abundance distribution following a lognormal distribution, the distribution of the number of individuals per species also follows a lognormal curve. Preston introduced his theoretical framework and equations (some purely his own, some drawing from relationships demonstrated by other scientists) that allow us to test this theory with empirical data. He then used observational surveys of various populations of organisms to test the relationship between the theory and actual ecological distributions of species and individuals, and finally discussed the limitations of these models.

Summary of Methods and Results: 
Preston argued that the distribution of individuals among species tends to be lognormal because of the relative rarity and commonness of those species in a given environment. He introduced a “Species Curve” (1948) that plots the logarithmic number of individuals per species as abscissa (i.e., along the x-axis), grouped into octaves that describe how many species in a given sample are roughly equally abundant, and the number of species falling in each octave as ordinate (i.e., on the y-axis). When graphed, the distribution of species among the octaves results in a lognormally distributed species curve, with the least common species occurring on the left and the most common species on the right. He termed the octave with the most species the “modal octave” (i.e., most species in a given area are neither very rare nor very common), which occurs at the peak of the curve.
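The octave binning itself can be sketched in a few lines. This simplified version assigns each species to octave floor(log2 n); Preston’s own procedure splits species that fall exactly on octave boundaries between adjacent octaves, a refinement omitted here. The abundances are invented.

```python
import math
from collections import Counter

# A sketch of Preston's octave binning (abundances invented). Each
# species with n individuals goes into octave floor(log2(n)), i.e.
# octave k holds species with 2**k <= n < 2**(k+1). Preston's own
# procedure splits boundary species between adjacent octaves; that
# refinement is omitted here.

def octave_counts(abundances):
    """Map octave index -> number of species falling in that octave."""
    return Counter(int(math.floor(math.log2(n))) for n in abundances)

abundances = [1, 1, 2, 3, 4, 5, 6, 7, 9, 15, 20, 33, 70]
# Octave 2 (4-7 individuals) holds the most species: the modal octave.
print(dict(sorted(octave_counts(abundances).items())))
```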
He then introduced a second curve – with the number of individuals per species on the y-axis – which he referred to as the “Individuals Curve” and which also follows a lognormal distribution, with the same dispersion constant as the species curve. The individuals curve reaches its maximum (just before descending) in correspondence with the position of the most common species on the species curve. He clarified that the theoretical species curve extends infinitely to the left and right, but that the true number of octaves generally lies around ±9 and varies depending on the number of species included (in other words, the octaves shown describe the most probable range of the distribution based on what is known about species abundance). He also introduced the concept of the “veil line,” a vertical line on the left side of the species curve denoting that species whose abundances fall below (to the left of) the line have not been included due to inadequate sampling.

He called the close lognormal correspondence between the species curve and the peak of the individuals curve “canonical” in an ecological context to accentuate the importance of these paired distributions. Preston then defined an additional equation that accounts for the number of individuals/pairs within a given area, the Species-Area Curve, which is particularly useful in situations where the population density is uniform over the considered area. The species-area curve may approximate a linear relation in the log-log plot (the Arrhenius plot) because, as the number of individuals and species sampled increases, the total proportion of species actually occurring in a given area that have been sampled begins to approximate the given “universe” (all species in an area).
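The z-value comparison used below can be illustrated by fitting the Arrhenius relation S = cA^z on log-log axes. A minimal sketch follows, with invented data constructed so that the true exponent is exactly 0.25 (close to, but not, Preston’s theoretical z = 0.262).

```python
import math

# A sketch of estimating the Arrhenius exponent z in S = c * A**z by a
# least-squares fit of log S against log A. The areas and species
# counts below are invented so that the true z is exactly 0.25.

def fit_z(areas, species):
    """Slope of the log-log regression of species count on area."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(s) for s in species]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

areas = [1, 10, 100, 1000]
species = [5 * a ** 0.25 for a in areas]
print(round(fit_z(areas, species), 3))  # -> 0.25
```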

After introducing these three theoretical curves, Preston tested the relationships with numerous datasets of observed abundances of species and individuals, including (but not limited to) birds from England/Wales, Finland, the Caribbean (“West Indies”), and Madagascar, land plants of the Galapagos, and flowering plants of the world. In examining curves for the species/individuals distributions of these data, he made the distinction between “isolates” (communities of organisms that have reached some equilibrium but are geographically separated from larger populations, e.g., on an island) and “samples” (subsets of a larger geographic community). He validated the results for each dataset by comparison to the z-value – a constant exponent that represents the average index over the range of the theoretical regression line (z = 0.262). For all of the case studies he assessed, z-values ranged from 0.220 to 0.325, which surprised even Preston by remaining fairly close to the theoretical index.

Preston concludes that the predominant factor controlling the slope of the curves is the phenomenon of the “expanding universe”: as you increase the sample size (of both individuals and species that are included), your sample more accurately approximates the environmental abundances. 

Proosdij et al. paper:

I chose this paper because I thought it provided an interesting expansion on the basic ideas of species occurrence demonstrated by Preston’s Species-Area Curve. Specifically, Proosdij et al. examined the effects of sample size and species prevalence (i.e., the proportion of the focal area occupied by that species) on the performance of species distribution models (SDMs). SDMs are widely employed in ecological niche modeling to predict the occurrence of species, given records of species presences and environmental factors that likely correspond to a given species’ distribution, but it is not well-documented how large a sample size is required to optimize model performance. The authors compared model performance in a simulated study area (100x100 raster cells along a west-east and north-south gradient with six prevalence classes) and an African study area (179,994 raster cells which excluded large aquatic environments and featured elevational/environmental variables). In both the simulated and real study areas, the presence of a given species in a given raster cell occurred when the environmental bivariate variables (2 axes taken from a principal component analysis that relate to the species’ most preferred conditions) overlapped with the central region of the bivariate normal density with probability = 68% (broadly, this indicates where conditions would be most favorable for the species). They ran models with numerous sample sizes for each prevalence class, and ultimately found that model performance increased as sample size increased (given constant prevalence) but decreased as prevalence increased (given a constant sample size). Though SDMs are fairly common, the Proosdij et al. study is novel in that it: A) demonstrated a new, rigorous method for determining minimum sample sizes and B) highlighted the importance of identifying minimum sample sizes before attempting to predict species occurrences.

My Thoughts:

Overall, I thought that Preston made a logical, clearly defined argument that was well-substantiated by both his theory and the examples he used, especially in comparison to some of the authors in previous papers. I thought he was very forthcoming in stating some of the inherent weaknesses and possible limitations of his approach, and he generally (although not always) defined the meaning of the variables he introduced in each equation. I also appreciated seeing the applications of the abundance distributions in a variety of taxa and geographical locations. However, I still felt that some of his assumptions had a bias towards temperate ecosystems; while he did include samples and observations from a wide array of “universes,” he did not discuss how the relative abundances shift when considering tropical ecosystems (or other study areas that have higher numbers of species and much lower numbers of individuals per species). I also thought that in a few instances he seemed to over-manipulate some of the distributions that didn’t fit the theoretical relationship as closely, which felt more like “fishing” for a good fit.

I think the Proosdij et al. paper was overall very rigorous in its methods, controlling for extraneous “noise” or variability by testing both the virtual study area and the African study area across six distinct prevalence classes and over 24 sample sizes. However, I would have liked to see a clearer interpretation of some of their statistical values, particularly those that are very specific to this discipline (e.g., Schoener’s D and the Hellinger distance). I also found their statement in the conclusion that their method could be applied to “any taxonomic clade or other group, study area and past, current or future climate scenario” to be overly broad – how would we go about applying these results (in terms of minimum sample-size constraints for narrow-ranged vs. widespread species) in the context of a different ecosystem, group of species, or study area?

Monday, September 9, 2019

Nicholson and Bailey 1935 & Okuyama 2017

Blog author: Annie Madsen

Citations:

Nicholson, A.J. & Bailey, V.A. 1935. The Balance of Animal Populations. Proceedings of the Zoological Society of London, 105(3):551-598.

Okuyama, T. 2017. Egg limitation and individual variation in parasitization risk among hosts in host-parasitoid dynamics. Ecology and Evolution, 7(9):3143-3148. doi: https://doi.org/10.1002/ece3.2916

Background
Alexander John Nicholson was an Australian entomologist who largely studied the population dynamics of insects. He studied zoology at the University of Birmingham and received an honorary D.Sc. from the University of Sydney in 1929. Much of his work concerned the Australian sheep blowfly (Lucilia cuprina) and host-parasite interactions. Victor Albert Bailey was a British physicist who worked on population dynamics and ionospheric physics (regarding the transmission of radio waves). He earned his Ph.D. at The Queen's College, Oxford, in 1923.

Toshinori Okuyama is currently an assistant professor of Entomology at National Taiwan University. He describes his research as sitting at the intersection of behavioral ecology and population biology, focusing mainly on species of jumping spiders.

Nicholson & Bailey
Intro
Nicholson opens with criticisms of previous mathematicians' work describing population dynamics (especially that of Lotka and Volterra). His main criticisms are that the earlier equations are too broad, were not tested against specific populations, do not consider competition, and disregard host-parasite interactions.

Section I
The authors' general assumptions included that animals require food, mates, and habitat; that searching is always random; that animals interact with each other and the environment; that animals easily find mates at the steady state of population density; and that offspring do not search for food because they are born amongst an abundant food source (inside the body of a host). Animals were also characterized by the following properties: areal range, volumetric range, area of discovery, and power of increase.

The authors provided a differential equation describing how intraspecific competition results in diminishing returns, an improvement on Volterra's model, which allowed density to increase indefinitely. They next acknowledged that individuals likely differ in searching efficiency and that objects differ in how difficult they are to find; both factors contribute to the diminishing returns of the searching animals and dampen interspecific oscillations in the system. However, they excluded these factors from the model at first for simplicity's sake. Likewise, satiety and hunger may influence oscillations but are not considered until later.

Section II
The authors determined the steady state of a closed system for one host and one parasite. Additional assumptions here include that the egg supply is unlimited and that the number of encounters is proportional to both the number of hosts destroyed and the number of offspring produced, because these processes are coupled. The steady state is defined as the density of the host species that can maintain a density of parasites which destroy only the surplus of hosts. The system is not dynamic under this definition, and oscillation of the two species is considered unstable.

The authors also tested several conditions to understand how various aspects of life history affect population dynamics, including fecundity, the number of times a host can be attacked, juvenile ("fledgling?") success, demographic stochasticity of hosts, and the length and synchrony of the effective period of the parasite and the vulnerable period of the host.

Section IV
This section investigates the population dynamics when populations are not at the steady density. The authors roll the models back to a single host and parasite species. The populations oscillated (with the parasite lagging the host by about a quarter period), but the amplitude of the oscillations grew over many generations.
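The diverging oscillations described here can be reproduced with the standard discrete-generation form of the Nicholson-Bailey model. A minimal sketch (the parameter values are illustrative assumptions, not taken from the paper):

```python
import math

# Nicholson-Bailey host-parasite model (discrete generations):
#   H[t+1] = lam * H[t] * exp(-a * P[t])        hosts escaping parasitism reproduce
#   P[t+1] = c * H[t] * (1 - exp(-a * P[t]))    each parasitized host yields c parasites
lam, a, c = 2.0, 0.05, 1.0  # illustrative: host power of increase, area of discovery, conversion

# Steady densities (set H[t+1] = H[t] and P[t+1] = P[t] and solve):
P_star = math.log(lam) / a
H_star = lam * math.log(lam) / (a * c * (lam - 1.0))

# Start slightly off the steady state and iterate
H, P = H_star * 1.01, P_star
peaks = []
for t in range(60):
    H, P = lam * H * math.exp(-a * P), c * H * (1.0 - math.exp(-a * P))
    peaks.append(H)

# The oscillations diverge: host swings later in the run exceed the early ones
print(max(peaks[:10]), max(peaks[40:]))
```

Iterating from a 1% perturbation, the host density oscillates with ever-growing amplitude, which is exactly the behavior the authors found suspicious.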

These same relationships were then tested with additional interactions, including a specific condition with a hyperparasite and a general parasite with two different hosts. The authors discussed the oscillation process and were skeptical of applying their simulations directly, because oscillations that grow in magnitude over many generations did not match what they considered to be actual population dynamics; the relative densities should instead have oscillated around the steady densities. The two-host system produced three different outcomes: increasing oscillations, decreasing oscillations, and convergence to the steady state.

Section V
The authors claim that continuous interaction with the host must account for both the time delay of food intake and the effects of age distribution (apparently this was Volterra's fatal mistake). Under these conditions, the number of parasite species needs to be equal to or greater than the number of host species in order to achieve a steady state.

Okuyama 2017
Egg limitation has been shown to destabilize consumer-resource dynamics, but can interact with individual variation in parasitoid foraging success to restabilize the system. Okuyama investigated a flipped perspective of this: the interaction between egg limitation and individual variation in host parasitization risk.

Okuyama translated the Nicholson-Bailey model into an individual-based model (IBM) to investigate individual variation. Parameters included number of eggs, mortality factor of eggs, host encounter rates, and the number of hosts in each generation. The foraging success of parasitoids was described by a negative binomial distribution and the parasitization risk of hosts was described by a multinomial distribution. Stability was quantified as the persistence of both hosts and parasitoids throughout the simulation.
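A much-simplified sketch of the individual-based idea (this is not Okuyama's actual model or code; the function name, distributions, and parameter values are all illustrative assumptions): each generation, the parasitoids' egg supply caps the total number of attacks, and hosts differ in their exposure to attack.

```python
import random

# One generation of host-parasitoid dynamics with egg limitation and
# unequal parasitization risk among hosts (illustrative sketch only).
def one_generation(n_hosts, n_parasitoids, eggs_per_parasitoid, lam=2.0, rng=random):
    # Unequal risk: each host draws an "exposure" weight (heterogeneity among hosts)
    weights = [rng.expovariate(1.0) for _ in range(n_hosts)]
    total_w = sum(weights)

    # Egg limitation: total attacks are capped by the parasitoids' egg supply
    total_attacks = n_parasitoids * eggs_per_parasitoid

    # Distribute attacks among hosts in proportion to their weights
    # (a multinomial draw, done attack by attack)
    attacked = [False] * n_hosts
    for _ in range(total_attacks):
        r = rng.uniform(0.0, total_w)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                attacked[i] = True
                break

    parasitized = sum(attacked)
    survivors = n_hosts - parasitized
    # Surviving hosts reproduce; each parasitized host yields one parasitoid
    return int(lam * survivors), parasitized

random.seed(1)
print(one_generation(n_hosts=100, n_parasitoids=10, eggs_per_parasitoid=5))
```

Because attacks concentrate on high-exposure hosts, some low-exposure hosts escape even when parasitoids are abundant; that heterogeneity in risk is the stabilizing ingredient Okuyama examines.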

Persistence, and therefore stability, in the system was not possible with density-dependent individual variation in foraging success in the absence of both egg limitation and variation in host parasitization risk. Egg limitation was shown to destabilize consumer-resource dynamics via the dilution effect (i.e., parasitoids lose efficiency with increasing host density) but stabilized dynamics when host risk of parasitization was variable.

Thoughts/Synthesis
The main difference between Okuyama's work and Nicholson & Bailey's foundational paper is the consideration of individual variation in behavior. Nicholson and Bailey assume within the construct of their model that individuals vary, but that the variation is negligible at the population level. Okuyama, however, transforms the model by applying a more complex-systems approach in which population-level processes emerge from individual behaviors. Necessarily, Nicholson and Bailey (1935) is much more comprehensive, because it pioneered the models of population dynamics in the more specific host-parasite system. Without being more immersed in the parasitism literature, I can't say definitively where and when these models and assumptions break down. However, the Nicholson and Bailey models are apparently used in predator-prey systems as well. The more specific cases of superparasitism and the proportional relationship of host encounter to egg success seem like they would break down in those systems, but the general functional response still holds some truth in modern systems.

Friday, September 6, 2019

Gleason 1926 and Cannone et al. 2007

Week 2: Gleason (1926) and Cannone et al. (2007) Blog Post 
Kat Jordan 

Foundations of Ecology Paper: The Individualistic Concept of Plant Association
Gleason, H. A. (1926). The Individualistic Concept of the Plant Association. In L. A. Real & J. H. Brown (Eds.), Foundations of Ecology: Classic Papers with Commentaries (pp. 98–117).
Companion Paper: Unexpected Impacts of Climate Change on Alpine Vegetation
Cannone, N., Sgorbati, S. and Guglielmin, M. (2007). Unexpected impacts of climate change on alpine vegetation. Frontiers in Ecology and the Environment, 5: 360-364. doi:10.1890/1540-9295(2007)5[360:UIOCCO]2.0.CO;2

Lead Paper and Author Background: 
            Gleason was a botanist and a contemporary of Clements. He disagreed with Clements's climax and organismic metaphor of plant communities. He argued that plant communities are much less uniform than previously thought, and that attempting to group plants into associations was far too generalized an approach. Instead, Gleason proposed that plant communities should be studied as unique areas, some of which may be similar, but no two exactly the same. At the time, his “individualistic theory” was not taken seriously, and Gleason ended up moving his focus to plant classification.
            The lead author of the companion paper is Dr. Nicoletta Cannone. She is an associate professor at Insubria University and was previously a researcher at Ferrara University. Her research focuses on the effect of climate change on cold-weather environments (high latitudes and high elevations).  
Gleason paper: 
·     Summary of question: 
            Describe plant associations – their “…definition, fundamental nature, structure, and classification” (Gleason, 1926) – by reevaluating what is known about them. Develop a better conclusion about plant communities to improve the understanding of botany as a whole. 
·     Methods:  
            Gleason begins by briefly reviewing what has been done to describe plant communities, and the importance of attempting to detail plant associations. He proposes that, in the past, humans have made jumps in logic (i.e. “abstract extrapolations”) when trying to understand plant communities. To overcome this, Gleason suggests that the facts should be reevaluated. 
·     Summary of results (key ideas): 
o  Two similar areas of vegetation which look reasonably alike would potentially constitute a plant association. However, the degree of uniformity is not as exact as others in the past have described; though two areas may seem similar, no two areas will ever be exactly alike. The key question is what degree of difference is permitted – in other words, how much variation is generally acceptable in a scientific context. This especially comes up since many studies had focused on northern-latitude plant associations rather than the tropics, where there are generally more species. 
o  Associations gradually blend into one another as you move through them. It is difficult to find the actual boundary. This makes it even more difficult to properly describe where one association begins and another ends. 
o  There are also “sporadic distributions” of plants within associations that are often disregarded because they do not fit within the context of what the association is supposed to be.
o   There can be changes from year to year for vegetation depending on factors such as rainfall and temperature, where there may be increases or decreases in certain species annually. 
o  Succession means that plant communities will eventually be replaced over time. The duration of a plant community can be long or short depending on what factors are impacting it. 
o  It is difficult to come up with an average example of different types of associations because they do differ so much.
o  There is a range of possibilities for where different seeds can germinate; germination does not necessarily have to occur in one strict location. Most plants can withstand a range of different environments.  
o  Hard to predict change in an environment. Migrants (and succession) are always a possibility. 

·     Discussion/Interpretation (key ideas):
o  Environments with similar climate and physiography may contain entirely different associations.
o  The plants found in an area are dependent on the immigrants coming in and the changing of environmental conditions, which work independently.
o   There is no reason two areas on Earth will look exactly the same. Areas that look similar to each other do so by chance, and this is subject to change. 
o  Plant associations, then, are when similar environments in the same region apply the same pressures to immigrants and residents, which can make areas look relatively the same. However, an exact, rigid definition of association is not possible since the similar environments can change rapidly and will never be exactly the same.   

Cannone et al. (2007) paper

·     Summary of question:
            What has the change in vegetation been, if any, from the 1950s until the early 2000s in the European Alps as the climate has become warmer? 

·      Methods:
            The study focused on high-elevation areas in the European Alps. Phytosociological maps from 1953 and 2003 were compared. Three measures (coverage, dynamics, and ecological series) were used to document vegetation changes (in area and elevation), and coverage was further divided into three categories (bare ground, discontinuous vegetation, and continuous vegetation). Dynamics were used to describe succession/invasion by plants from different altitudes. Ecological series were broken into seven habitat types. Precipitation and temperature records were also analyzed from two nearby stations, and snow cover records were acquired from another local station from 1978 onwards. 

·     Results (key ideas):
o  Climate conditions in the Alps changed between 1953 and 2003, with average air temperatures rising especially after the 1980s. 
o  Precipitation levels fluctuated but generally increased after the 1980s. 
o  Permafrost degradation coincided with change in temperature. 
o  Vegetation increased in coverage, with low altitude plants increasing and the displacement of shrublands at high altitudes.

·     Discussion (key ideas):
o  Expansion of different shrubs increased competition at alpine elevations. 
o  There was also an upwards migration of some alpine grasses. 
o  More precipitation and earlier snow melt could mean more flooding and debris flow. This could mean more bare ground and vegetation rejuvenation. 
o  Permafrost degradation could lead to more landslides and debris flow, affecting the way plant communities can appear. 
o  The study confirmed that vegetation at lower and higher altitudes responds quickly and flexibly to changes in climate (even small changes such as an increase of 1-2°C). 

My thoughts: 
            I chose the Cannone et al. (2007) paper because I felt it captured the idea Gleason was trying to get across.  Environmental changes, in this case climate change, have transformed plant communities in the European Alps. As seen in Gleason's paper, changes in environment (and in the migration of species) have caused alpine and high-latitude (nival) areas to change in composition. The Cannone et al. (2007) paper did not mention anything about the Alps appearing visually different from the past. This is probably because of the subjectivity of this idea and because it may be hard for a human to observe a change over 50 years (i.e. a person may not remember what the Alps used to look like). Regardless, observable differences (in maps) can show how plant communities in the Alps are becoming different than they once were. If climate change has continued to affect the Alps since this paper was published over ten years ago, then even more changes can be observed now.

             If the plant associations in the Alps were labelled in the way Clements had done (as we read last week), I am not entirely sure how the new appearance of these plant communities would be discussed. I prefer Gleason’s method of talking about plant communities, though he never concluded on how to describe an association. As a paleontologist, I want to conclude that, in my mind, associations are easier to categorize when looking backwards. This is to say that in hindsight, when looking at the fossil record for example, it may be easier to give an association a label. In a dynamic present, it may be more challenging to properly categorize something, only for it to change again.