<p>There are two buzzwords in nature management: fragmentation and connectivity. Not only (rail) roads, but also agricultural, residential and industrial areas fragment previously connected (or even continuous) habitat. Common sense tells us that the answer to habitat fragmentation is defragmentation, and hence much effort is put into building corridors, of which fauna crossings are just one example. Corridors are conduits connecting two pieces of habitat through an environment of hostile non-habitat. As such, the use of corridors need not be restricted to the animal kingdom; plants can also use them as stepping-stones for their seeds, enabling them to colonize distant habitat. Although corridors may act not only as conduits but also as habitat, filters or even barriers, in most cases they are constructed primarily for their conduit function. Connectivity is nowadays taken to its extreme in the "Ecologische Hoofdstructuur" (Ecological Main Structure) in The Netherlands. This is a plan in operation to create an extensive ecological structure by connecting a substantial part of the remaining "natural" habitat, which includes conduits of decommissioned farmland bought by the government. Similar plans exist in other parts of the world.</p><p>Needless to say, there are good reasons for building corridors and plans involving them. Yet, there are some valid arguments against connecting everything. The risk of spreading infectious diseases through these corridors is one of the most prominent. The spread of the effects of (natural) catastrophes such as fire is another. But even when dismissing such negative effects of connectivity, there may be other mitigating measures that are much more efficient (and less expensive) than building corridors.
The question of whether this is the case, and of how alternatives should be compared, stimulated the work for this thesis.</p><p>A theory that is well suited for predicting the effects of fragmentation is metapopulation theory. As almost every text on metapopulations will tell you, this theory was conceived by Richard Levins in 1969-1970, although its roots may be found in earlier work. The core of the theory is the following observation. Populations are assumed to live in distinct habitat fragments, called patches. These local populations can go extinct relatively quickly, but immigration from other patches can lead to recolonization of empty patches. Thus, the whole population of populations, the metapopulation, can potentially persist if these recolonizations outweigh the extinctions of local populations. In a sense, the population spreads the risk of extinction by spatial separation. The basic model of the theory captures these processes in a simple ordinary differential equation.</p><p>This thesis consists of eight chapters and is divided into four parts. Part I, containing only one chapter, can be regarded as a review of fundamental metapopulation processes, set in the context of a persistent problem in conservation science, the SLOSS problem. This problem, whose acronym stands for Single Large Or Several Small, raises the question of whether the optimal design of a habitat network consists of a single large nature reserve or several small reserves. Although this question was initially concerned with biodiversity (which design can contain the largest number of species?), it can equally well be applied to a single species living in a metapopulation, for which the question becomes: which design optimizes the persistence of the species?</p><p>Defined thus, it represents a fine example of opposing processes requiring mathematical modelling.
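</p><p>For concreteness, the Levins equation referred to above is usually written as dp/dt = cp(1 − p) − ep, where p is the fraction of occupied patches, c the colonization rate and e the local extinction rate; for c &gt; e it has the stable equilibrium p* = 1 − e/c. A minimal numerical sketch, with parameter values that are purely illustrative and not taken from the thesis:</p>

```python
# Sketch of the classic Levins metapopulation model
#   dp/dt = c * p * (1 - p) - e * p
# p: fraction of occupied patches, c: colonization rate,
# e: local extinction rate. Parameter values are illustrative only.

def levins_equilibrium(c, e):
    """Nontrivial equilibrium occupancy p* = 1 - e/c (0 if c <= e)."""
    return max(0.0, 1.0 - e / c)

def simulate_levins(c, e, p0=0.1, dt=0.01, steps=20000):
    """Integrate the Levins ODE with a simple Euler scheme."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

if __name__ == "__main__":
    c, e = 0.5, 0.2
    print(levins_equilibrium(c, e))  # equilibrium occupancy, here 0.6
    print(simulate_levins(c, e))     # numerical solution approaches it
```

<p>The equation already displays the tension described above: colonization succeeds only into the empty fraction 1 − p of the patch network, while extinction removes a constant fraction of occupied patches per unit time.</p><p>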
On the one hand, patches must be as large as possible to minimize the risk of local extinction; on the other hand, there must be as many patches as possible to maximize the probability of recolonization and to minimize the risk of simultaneous extinction. Precise mathematical formulation of these thoughts can in principle lead to a solution of the problem. Sometimes the mathematical formulation requires that the question be expressed differently or more clearly. In chapter 1 SLOSS is replaced by the more neutral FLOMS, short for Few Large Or Many Small, because in the chosen framework a single large patch is not really possible (it exists only in a limit).</p><p>Which design is optimal turns out, not completely surprisingly, to depend upon the measure one employs for metapopulation persistence. Two measures are introduced: the metapopulation extinction time and the colonization potential, which is a type of basic reproduction number (the number of patches colonized by a local population during its lifetime in an environment where all other patches are empty). These measures return in subsequent chapters.</p><p>Which design is optimal also depends on how designs with different size and number of patches are compared. In chapter 1 this is done such that the amount of habitat per unit area is constant. This implies that few large patches have larger interpatch distances than many small patches.</p><p>The two measures are functions of the extinction and colonization rates of the metapopulation. Several mechanisms for the extinction and colonization processes are formulated, from which the dependence of these rates on patch size is calculated. It turns out that the metapopulation extinction time generally increases with patch size for all mechanisms, which supports the preference for few large patches. However, the colonization potential supports this preference only in the case of some special, rather unrealistic, mechanisms.
In many other, more realistic, cases an intermediate patch size exists for which metapopulation persistence measured by the colonization potential is optimal.</p><p>Part II concentrates on the Levins model. Models are often considered inadequate because the underlying assumptions are thought to be unrealistic. Yet, these assumptions can be formulated in a way that is stronger than necessary for the development of the mathematical model. Therefore, they need to be subjected to careful scrutiny, such that all superfluous elements are eliminated. If the model is still discarded, at least it is so for the right reasons.</p><p>In chapter 2 the assumptions of the Levins model are examined. One of the assumptions as it often appears in the literature proves to be too strong: after colonization, the newly born population need not grow to the carrying capacity. It is sufficient if local dynamics are fast enough for a steady population size distribution to be established. It follows that patches need not all have the same extinction and colonization rates, but merely that these form a steady distribution depending on the population size distribution. The extinction and colonization rates in the Levins model are weighted averages over these distributions. Although this does not make the model much more realistic, it does remove restrictions on more realistic extensions of the Levins model. Three such extensions are studied: two in chapter 2, involving the rescue effect and the patch preference effect, and one in chapter 3, dealing with the Allee effect. The first and third are attempts at a more careful and more mechanistic formulation of already existing models.
Although the conclusions remain basically the same in these new formulations, they provide more insight into the processes responsible and are scientifically and aesthetically more satisfactory.</p><p>The second extension of the Levins model in chapter 2 incorporates preference for occupied or empty patches. Preference for occupied patches may arise because of conspecific attraction; preference for empty patches seems plausible for territorial species. Preference for empty patches is shown to increase patch occupancy; preference for occupied patches lowers it. Chapter 2 also briefly studies patch preference and the rescue effect simultaneously, because it is not a priori evident how the rescue effect interferes with patch preference. On the one hand, empty patches should be preferred because colonization of empty patches is the only way in which the metapopulation can reproduce. On the other hand, additional colonization of occupied patches prolongs survival of the local population due to the rescue effect. It turns out that the effects are almost additive as far as patch occupancy is concerned. This could, however, be quite different if the metapopulation extinction time is taken as a measure of metapopulation persistence.</p><p>Most metapopulation models, particularly Levins-type models, are only used to study equilibria. The last chapter of part II, chapter 4, deals with non-equilibria and their consequences for metapopulation management, using both the Levins model and its stochastic counterpart. These non-equilibria are created by imposing sudden changes in patch number and the colonization and extinction parameters on systems in equilibrium.
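</p><p>The stochastic counterpart mentioned here tracks the integer number of occupied patches rather than a fraction, so that the metapopulation can actually go extinct. A minimal Gillespie-style sketch of such a model; the rate forms and parameter values are illustrative assumptions, not the thesis's exact formulation:</p>

```python
import random

def extinction_time(N=10, c=0.5, e=0.3, rng=random):
    """One realization of the time until all patches are empty, for a
    birth-death chain on the number k of occupied patches with
    colonization rate c*k*(N-k)/N and local extinction rate e*k
    (an assumed stochastic analogue of the Levins model)."""
    k, t = N, 0.0
    while k > 0:
        col = c * k * (N - k) / N
        ext = e * k
        total = col + ext
        t += rng.expovariate(total)     # exponential waiting time
        if rng.random() < col / total:  # next event: a colonization...
            k += 1
        else:                           # ...or a local extinction
            k -= 1
    return t

def mean_extinction_time(reps=200, **kwargs):
    """Monte Carlo estimate of the metapopulation extinction time."""
    return sum(extinction_time(**kwargs) for _ in range(reps)) / reps
```

<p>Averaging many such realizations illustrates why the metapopulation extinction time is expensive to compute compared with deterministic quantities such as equilibria or relaxation times, and it is against estimates of this kind that alternative conservation strategies can be ranked.</p><p>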
One of the most striking results is that if we want to counteract the effects of habitat loss or increased dispersal resistance, the optimal conservation strategy is not to restore the original situation (that is, to create habitat or decrease resistance against dispersal), but rather to improve the quality of the remaining habitat in order to decrease the local extinction rate. Optimality here pertains to the metapopulation extinction time computed using the stochastic model. Chapter 4 also tells us that using the relaxation time of the deterministic Levins model as a surrogate for the metapopulation extinction time is not always warranted, which is not totally surprising, yet still somewhat disappointing, because the metapopulation extinction time is often hard to compute.</p><p>Chapter 4 forms a bridge between parts II and III: it introduces the stochastic approach used in the chapters following it, and it already provides us with a rule of thumb for metapopulation conservation, as stated above. In part III, rules of thumb that can guide management of metapopulations play a central role. First, in chapter 5, rules of thumb are derived on the abstract level of colonization and extinction probabilities. Then, in chapter 6, some of these rules are tested on the less abstract level of two landscape characteristics which often largely determine the probabilities of colonization and extinction, viz. patch size and interpatch distance.</p><p>The rules of thumb generated in chapter 5 can be summarized as follows: to optimize the metapopulation extinction time, decreasing the risk of local extinction is preferable to increasing colonization probability, and this should generally be done in the least extinction-prone patches; if changing local extinction risk is impossible, then increasing the colonization probability between the two least extinction-prone patches is most preferable.
When extinction and colonization are related to patch size and interpatch distance in chapter 6 by mechanistic submodels of the corresponding processes, the last two of these rules transform into: the preferred strategies to optimize the metapopulation extinction time and the basic reproduction number are, firstly, increasing the size of the largest patch (which is least extinction-prone) and, secondly, decreasing the effective interpatch distance between the two largest patches. These rules are less strongly supported than those of chapter 5, and the first is even reversed if absolute (instead of relative) increases in patch size are considered. The reason for this is that in the mechanistic submodel for local extinction a large patch requires a large increase in size to substantially alter its local extinction probability. Since it is not a priori clear whether increases in patch size must be compared on an absolute or a relative basis, final conclusions cannot be drawn. Thus, chapters 5 and 6 are two parts of a trilogy that would be completed by a socio-politico-economic chapter taking into account, for example, the costs of habitat creation in relation to the size of the patch to which habitat is added. That is, it would then be almost completed, because there should also be an additional section on the important biological question of how ecoducts and the like change the effective interpatch distance; this is usually simply hidden in the parameters. Although the trilogy is not complete, at least more light has been shed on the range of possible final conclusions and, more importantly, on the conditions under which they are valid.</p><p>Whereas the first three parts of this thesis deal with general models of hypothetical metapopulations, and are somewhat academic, part IV concentrates on (statistical) methodology assisting in making model predictions, illustrated by two real case studies.
Chapter 7 shows how, using uncertainty analysis, the (relative) impact of human interventions can be predicted despite poor-quality data, for two amphibian species threatened by the reinstatement of an old railway track. Again, the measure employed, in this case the metapopulation extinction time and the occupancy of each local population, plays a crucial role in deciding which scenario of human intervention is most preferable. It is also noted that the optimal scenario may differ between species, which complicates the decision-making process, because species must then be assigned a certain quantity representing their importance. Furthermore, the most important source of uncertainty is not the uncertainty in the effects of the railway track on extinction and colonization, as one might expect, but the uncertainty due to the inherent stochastic nature of the model combined with the uncertainty about the default parameter settings.</p><p>Chapter 8 demonstrates how Bayesian inference using Markov chain Monte Carlo (MCMC) simulation can help in obtaining (estimates of posterior) probability distributions of metapopulation model parameters based on a dataset typical of metapopulation studies: a few years of occupancy data (presence or absence) for the tree frog in 202 patches, with many missing values. Parameter estimation methods were available before for such datasets (and surely formed a source of inspiration for this new method), but none of them could both use all the information in the dataset and provide a joint probability distribution of the parameters rather than a point estimate. Such a joint probability distribution is necessary for model predictions that take into account the uncertainties about the model parameters. It is computationally demanding, however; so much so that until recently it would not have been feasible within a reasonable time.
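</p><p>To make the idea concrete, the sketch below applies a random-walk Metropolis sampler to a toy discrete-time stochastic patch occupancy model: an occupied patch survives a year with probability 1 − e, and an empty patch is colonized with probability c times the current fraction of occupied patches, under uniform priors on c and e. This only illustrates the MCMC approach; it is not the thesis's incidence function model, its tree frog dataset, or the algorithm of chapter 8:</p>

```python
import math
import random

def simulate(c, e, n_patches=50, n_years=10, rng=random):
    """Generate synthetic yearly presence/absence data (toy model)."""
    state = [1] * n_patches
    data = [state[:]]
    for _ in range(n_years - 1):
        frac = sum(state) / n_patches
        state = [(1 if rng.random() > e else 0) if s
                 else (1 if rng.random() < c * frac else 0)
                 for s in state]
        data.append(state[:])
    return data

def log_likelihood(c, e, data):
    """Log-probability of the observed year-to-year transitions."""
    n = len(data[0])
    ll = 0.0
    for t in range(len(data) - 1):
        frac = sum(data[t]) / n
        p_col = min(max(c * frac, 1e-12), 1.0 - 1e-12)
        for s, s_next in zip(data[t], data[t + 1]):
            if s:
                p = 1.0 - e if s_next else e
            else:
                p = p_col if s_next else 1.0 - p_col
            ll += math.log(p)
    return ll

def metropolis(data, n_iter=3000, step=0.05, rng=random):
    """Random-walk Metropolis sampling of (c, e) with uniform(0,1) priors."""
    c, e = 0.5, 0.5
    ll = log_likelihood(c, e, data)
    samples = []
    for _ in range(n_iter):
        c_new = c + rng.gauss(0.0, step)
        e_new = e + rng.gauss(0.0, step)
        if 0.0 < c_new < 1.0 and 0.0 < e_new < 1.0:  # prior support
            ll_new = log_likelihood(c_new, e_new, data)
            if math.log(rng.random()) < ll_new - ll:
                c, e, ll = c_new, e_new, ll_new
        samples.append((c, e))
    return samples
```

<p>Discarding a burn-in and averaging the remaining samples gives posterior means close to the parameters used to generate the data, while the full cloud of samples provides the joint distribution needed for model predictions under parameter uncertainty.</p><p>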
Therefore, the appendix of chapter 8 also supplies an efficient algorithm.</p><p>What does this thesis contribute to metapopulation theory and to metapopulation management? Aware that I may not be the right person to answer this question, I will nevertheless endeavor to provide an answer, at the risk of being pretentious.</p><p>As far as metapopulation theory is concerned, I hope to have drawn attention to some underexposed aspects (the necessity of a careful definition of the SLOSS problem and the constant realization that different measures may yield different conclusions). Furthermore, I hope to have shown how existing models may be adjusted to a more satisfactory form that can be more easily extended (by incorporating the rescue and Allee effects into the Levins model). I also hope to have built more solid foundations and intermodel connections (by formulating more precise assumptions of the Levins model and examining the extensions that result when one of these assumptions is violated, by comparing the stochastic and deterministic versions of the Levins model, and by studying different modifications of the discrete-time stochastic model), and to have made some fairly original additions to the theory (patch preference, non-equilibria).</p><p>As far as metapopulation management is concerned, I would be content if, due to my work, those responsible for metapopulation management thought twice before they decided upon, for example, building an ecoduct. At the same time, I would be disappointed if they followed the rules of thumb mindlessly. Along with many skeptical scientists, particularly biologists, I do not believe that there are rules of thumb that can be relied upon unconditionally. Yet, far from wanting to dispose of them altogether, I think they are very important; their value lies in summarizing a large part of our knowledge, the importance of which evidently increases with the robustness of the rules, and in provoking discussions.
These discussions already commence in chapters 5 and 6, and will hopefully be taken up by others. They should deal with the many assumptions underlying the rules of thumb: when these assumptions are (approximately) valid, when they are clearly violated, and the extent to which such violations entail a change in the rules of thumb.</p><p>Furthermore, I would be pleased if uncertainty analysis of metapopulation model predictions became standard, especially in situations where expert judgment is the most significant source for parameterizing a model. I hope that chapter 7 makes clear that there are sophisticated yet easily understandable and implementable techniques. Likewise, I would be satisfied if our Bayesian parameterization method came into vogue in cases where data are available. With the example of a non-standard incidence function model, I hope to have demonstrated its generality.</p>
<table>
  <tr><th>Qualification</th><td>Doctor of Philosophy</td></tr>
  <tr><th>Award date</th><td>26 Mar 2002</td></tr>
  <tr><th>Place of Publication</th><td>S.l.</td></tr>
  <tr><th>Publication status</th><td>Published - 2002</td></tr>
</table>
<ul>
  <li>mathematical models</li>
  <li>nature conservation</li>
  <li>population dynamics</li>
</ul>