
An example of fitting a negative binomial distribution is given below. Carpenter bee larvae are common in the inflorescence stalks of soap-tree yucca plants in southern New Mexico. An insect ecologist interested in the spatial pattern of these bees counted the bee larvae in a random sample of 180 yucca stalks. These data, summarized in a frequency table, are

x      0     1     2     3     4     5     6     7     8     9     10

fx   114    25    15    10     6     5     2     1     1     0      1

where x is the number of bee larvae per stalk and fx is the frequency of yucca stalks having x = 0, 1, 2, …, r larvae. In this example, r = 10. The total number of sampling units is

N = Σ fx = 114 + 25 + … + 0 + 1 = 180

and the total number of individuals is

Σ x fx = (0)(114) + (1)(25) + … + (9)(0) + (10)(1) = 171

The arithmetic mean of the sample is

x̄ = Σ x fx / N = 171/180 = 0.95

and the variance is

s² = [Σ x² fx - (Σ x fx)²/N] / (N - 1) = [681 - (171)²/180] / 179 = 2.897
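These summary values can be checked with a few lines of code. The following Python sketch is an illustration added here, not part of the original worked example; it simply recomputes N, the total count, the mean and the variance from the frequency table.

    # Frequency table: x = number of larvae per stalk, fx = number of stalks
    x  = list(range(11))
    fx = [114, 25, 15, 10, 6, 5, 2, 1, 1, 0, 1]

    N     = sum(fx)                                     # 180 sampling units
    total = sum(xi * fi for xi, fi in zip(x, fx))       # 171 individuals
    mean  = total / N                                   # 0.95
    ssq   = sum(fi * xi ** 2 for xi, fi in zip(x, fx))
    var   = (ssq - total ** 2 / N) / (N - 1)            # 2.897
    print(N, total, round(mean, 2), round(var, 3))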

Step 1. Hypothesis: The null hypothesis is that the carpenter bee larvae exhibit a clumped pattern in yucca inflorescence stalks and, hence, agreement (of the number of individuals per sampling unit) with a negative binomial is tested. Since the variance is greater than the mean, a clumped pattern is reasonably suspected.

Step 2. Frequency distribution, fx: The observed frequency distribution, along with its mean and variance, is given above.

Step 3. Negative binomial probabilities, P(x): An estimate of k̂ obtained using Equation (6.51), k̂ = x̄²/(s² - x̄), with x̄ = 0.95 and s² = 2.897 is

k̂ = (0.95)²/(2.897 - 0.95) = 0.4635

Since both x̄ and k̂ are less than 1, Equation (6.50),

log10(N/N0) = k̂ log10(1 + x̄/k̂)

should be used to estimate k̂. Substituting the values N = 180 and N0 = 114 into the left-hand side (LHS) of Equation (6.50) gives log10(180/114) = 0.1984. Now, substitution of k̂ = 0.4635 into the right-hand side (RHS) of Equation (6.50) gives the following:

Iteration 1: RHS = (0.4635) log10(1 + 0.95/0.4635) = 0.2245

Since the RHS is larger than 0.1984, a value lower than 0.4635 is now substituted for k̂ in Equation (6.50). Selecting k̂ = 0.30 gives

Iteration 2: RHS = (0.30) log10(1 + 0.95/0.30) = 0.1859

This is close to the value 0.1984 (but now lower), so for the next iteration a slightly higher value of k̂ is chosen. Using k̂ = 0.34 gives

Iteration 3: RHS = (0.34) log10(1 + 0.95/0.34) = 0.1969

Again, for the next iteration, a value of k̂ that is slightly higher is tried. For k̂ = 0.3457,

Iteration 4: RHS = (0.3457) log10(1 + 0.95/0.3457) = 0.1984

This is numerically identical to the LHS of Equation (6.50) and so, for this example, the best estimate of k̂ is 0.3457. Next, using Equation (6.49), the individual and cumulative probabilities of finding 0, 1, 2, 3 and 4 larvae per stalk (for x̄ = 0.95 and k̂ = 0.3457) are given in Table 6.18.

The cumulative probability after including P(4), the probability of 4 individuals in a sampling unit, is 94.6%. Therefore, the remaining probabilities, P(5) through P(10), together contribute 5.4%, that is,

P(5+) = 1.0 - 0.946 = 0.054

Table 6.18. Calculation of P(x), the negative binomial probabilities, for x individuals (bees) per sampling unit (yucca stalks)

x     Probability                                       Cumulative probability
0     P(0) = (1 + 0.95/0.3457)^(-0.3457) = 0.6333          0.6333
1     P(1) = (0.2535)(0.6333)            = 0.1605          0.7938
2     P(2) = (0.4933)(0.1605)            = 0.0792          0.8730
3     P(3) = (0.5733)(0.0792)            = 0.0454          0.9184
4     P(4) = (0.6133)(0.0454)            = 0.0278          0.9462
5+    P(5+) = 1 - 0.9462                 = 0.0538          1.0000
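The whole of Step 3 can also be scripted. The Python sketch below is ours and is offered only as an illustration: it uses a bisection search in place of the trial-and-error substitution above to solve Equation (6.50), and then generates the probabilities of Table 6.18 through the standard negative binomial recursion P(x) = [(k̂ + x - 1)/x][x̄/(x̄ + k̂)]P(x - 1), used here in place of direct evaluation of Equation (6.49).

    import math

    N, N0 = 180, 114                 # total stalks and stalks with zero larvae
    mean, var = 0.95, 2.897

    # Moment estimate of k (Equation 6.51)
    k = mean ** 2 / (var - mean)                 # about 0.4635

    # Refine k with the zero-frequency relation (Equation 6.50):
    # k log10(1 + mean/k) = log10(N/N0), solved here by bisection
    lhs = math.log10(N / N0)                     # about 0.1984
    rhs = lambda kk: kk * math.log10(1 + mean / kk)
    lo, hi = 0.01, k                             # rhs increases with k, so this brackets the root
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if rhs(mid) < lhs else (lo, mid)
    k = (lo + hi) / 2
    print(round(k, 4))                           # about 0.3457

    # Negative binomial probabilities (Table 6.18)
    c = mean / (mean + k)                        # common factor x̄/(x̄ + k̂)
    P = [(1 + mean / k) ** (-k)]                 # P(0), about 0.6333
    for x in range(1, 5):
        P.append((k + x - 1) / x * c * P[-1])    # P(1) ... P(4)
    P.append(1 - sum(P))                         # tail probability P(5+), about 0.054
    print([round(p, 4) for p in P])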

Step 4. Expected frequencies, Ex: The probabilities are multiplied by the total number of sampling units to obtain the expected frequencies (Table 6.19).

Table 6.19. Calculation of expected frequencies of sampling units containing different numbers of bees

x     Expected frequency Ex                    Cumulative frequency
0     E0  = (N)P(0)  = (180)(0.633) = 114.00        114.00
1     E1  = (N)P(1)  = (180)(0.161) =  28.90        142.90
2     E2  = (N)P(2)  = (180)(0.079) =  14.25        157.20
3     E3  = (N)P(3)  = (180)(0.045) =   8.17        165.30
4     E4  = (N)P(4)  = (180)(0.028) =   5.00        170.30
5+    E5+ = (N)P(5+) = (180)(0.054) =   9.68        180.00

Step 5. Goodness of fit: The chi-square test statistic (χ²) is computed as

χ² = Σ (fx - Ex)²/Ex = 0.00 + … + 0.01 = 1.18

the individual terms being shown in Table 6.20.

This value of the test statistic is compared with the critical value of the chi-square distribution with (number of classes - 3) = 3 degrees of freedom. The critical value at the 5% probability level is 7.82 (Appendix 4), and since the computed value of χ² (1.18) is well below this, we do not reject the null hypothesis. The negative binomial model appears to be a good fit to the observed data, but we would want further confirmation (e.g., an independent set of data) before making definitive statements that the pattern of the carpenter bee larvae is, in fact, clumped. Note that when the minimum expected values are allowed to be as low as 1.0 and 3.0 in this example, the χ² values are 2.6 and 2.5, respectively, still well below the critical value.

Table 6.20. Calculation of the χ² test statistic

Number of bee larvae      Observed         Expected         (fx - Ex)²/Ex
per stalk (x)             frequency fx     frequency Ex
0                         114              114.0            0.00
1                          25               28.9            0.53
2                          15               14.3            0.04
3                          10                8.2            0.41
4                           6                5.0            0.19
5+                         10                9.7            0.01
Total                     180              180.0            χ² = 1.18
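Steps 4 and 5 can likewise be reproduced directly from the probabilities of Table 6.18; the short Python sketch below is illustrative only.

    observed = [114, 25, 15, 10, 6, 10]                          # classes 0, 1, 2, 3, 4 and 5+
    probs    = [0.6333, 0.1605, 0.0792, 0.0454, 0.0278, 0.0538]  # from Table 6.18
    N = 180

    expected = [N * p for p in probs]                            # Table 6.19
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print([round(e, 2) for e in expected])                       # about 114.0, 28.9, 14.3, 8.2, 5.0, 9.7
    print(round(chi2, 2))                                        # about 1.18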

Apart from fitting statistical distributions, certain easily computed indices, such as the index of dispersion or Green's index, can also be used for detecting spatial patterns when the sampling units are discrete.

(i) Index of dispersion: The variance-to-mean ratio, or index of dispersion (ID), is

ID = s²/x̄     (6.52)

where x̄ and s² are the sample mean and variance, respectively. The variance-to-mean ratio (ID) is useful for assessing the agreement of a set of data to the Poisson series. However, in terms of measuring the degree of clumping, ID is not very useful. When the population is clumped, ID is strongly influenced by the number of individuals in the sample and, therefore, ID is useful as a comparative index of clumping only if n is the same in each sample. A modified version of ID that is independent of n is Green's index (GI), which is computed as

GI = (ID - 1)/(n - 1)     (6.53)

where n = Σ x fx is the total number of individuals in the sample.

GI varies between 0 (for a random pattern) and 1 (for maximum clumping). Thus, Green's index can be used to compare samples that vary in the total number of individuals, their sample means, and the number of sampling units in the sample. Consequently, of the numerous variants of ID that have been proposed to measure the degree of clumping, GI seems the most recommendable. The value of GI for the bee larvae data may be obtained as

GI = (2.897/0.95 - 1)/(171 - 1) = 0.012

Since the maximum value of GI is 1.0 (if all 171 individuals had occurred in a single yucca stalk), this value represents a relatively low degree of clumping.
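A Python sketch of these two indices, assuming the forms of Equations (6.52) and (6.53) given above, is shown only as a convenience:

    mean, var   = 0.95, 2.897
    individuals = 171                          # total larvae counted, Σ x·fx

    ID = var / mean                            # index of dispersion, about 3.05
    GI = (ID - 1) / (individuals - 1)          # Green's index, about 0.012
    print(round(ID, 2), round(GI, 3))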

6.3.4. Dynamics of ecosystems

It is well known that forests, considered as ecosystems, exhibit significant changes over time. An understanding of these dynamic processes is important from both scientific and management perspectives. One component, the prediction of growth and yield of forests, has received the greatest attention in the past. However, there are several other equally important aspects of forest dynamics, such as the long-term effects of environmental pollution, successional changes, and the dynamics, stability and resilience of both natural and artificial ecosystems. These different purposes require widely different modelling approaches. The complexity of these models does not permit even an overview here; what is attempted below is a simplified description of some of the potential models in this context.

Any dynamic process is shaped by the characteristic time-scale of its components. In forests, these range from minutes (stomatal processes) to hours (diurnal cycle, soil water dynamics), to days (nutrient dynamics, phenology), to months (seasonal cycle, increment), to years (tree growth and senescence), to decades (forest succession) and to centuries (forest response to climatic change). The model purpose determines which of these time-scales will be emphasized. This usually requires an aggregated description of processes having different time-scales, but the level of aggregation will depend on the degree of behavioural validity required.

The traditional method of gathering data to study forest dynamics at a macro level is to lay out permanent sample plots and make periodical observations. More recently, remote sensing through satellites and other means has offered greater scope for gathering accurate historical data on forests efficiently. Without going into the complexities of these alternative approaches, this section describes the use of permanent sample plots for long-term investigations in forestry and illustrates a forest succession model in a much simplified form.

 

 

(i) Use of permanent sample plots

The dynamics of natural forests is best studied through permanent sample plots. Although the size and shape of plots and the nature and periodicity of observations vary with the purpose of investigation, some guidelines are given here for general ecological or forest management studies.

Representative sites in each category of forest are to be selected and sample plots laid out for detailed observations on regeneration and growth. The plots have to be fairly large, at least one hectare in size (100 m x 100 m), and located in different sites having stands of varying stocking levels. It is ideal to have at least 30 plots in a particular category of forest for studying the dynamics as well as the stand-site relations. The plots can be demarcated by digging small trenches at the four corners. A map of the location indicating the exact site of each plot is also to be prepared. A complete inventory of the trees in the plots has to be taken, marking individual trees with numbered aluminium tags. The inventory shall cover basic measurements such as the species name and girth at breast-height for mature trees (>30 cm gbh over bark) and saplings (between 10 cm and 30 cm gbh over bark). The seedlings (<10 cm gbh over bark) can be counted in randomly or systematically selected sub-plots of size 1 m x 1 m.

Plotwise information on soil properties has to be gathered using multiple sampling pits but composited at the plot level. The basic parameters shall include soil pH, organic carbon, soil texture (gravel, sand, silt and clay content), soil temperature and moisture levels. Observations on topographical features like slope, aspect, nearness to water source etc. are also to be recorded at the plot level.

(ii) A model for forest transition

The model considered here is the Markov model, which requires the use of certain mathematical constructs called matrices. A basic description of matrix theory is furnished in Appendix 7 for those unfamiliar with that topic. A first-order Markov model is one in which the future development of a system is determined by the present state of the system and is independent of the way in which that state has developed. The sequence of results generated by such a model is often termed a Markov chain. Application of the model to practical problems has three major limitations, viz., the system must be classified into a finite number of states; the transitions must take place at discrete instants, although these instants can be so close as to be regarded as continuous in time for the system being modelled; and the probabilities of transition must not change with time. Some modification of these constraints is possible, but at the cost of increasing the mathematical complexity of the model. Time-dependent probabilities can be used, as can variable intervals between transitions, and, in higher-order Markov models, transition probabilities depend not only on the present state but also on one or more preceding ones.

The potential value of Markovian models is particularly great, but they have not so far been widely applied in ecology. However, preliminary studies suggest that, where the ecological systems under study exhibit Markovian properties, and specifically those of a stationary, first-order Markov chain, several interesting and important analyses of the model can be made. For example, algebraic analysis of the transition matrix will determine the existence of transient sets of states, closed sets of states or an absorbing state. Further analysis enables the basic transition matrix to be partitioned and several components investigated separately, thus simplifying the ecological system studied. Analysis of the transition matrix can also lead to the calculation of the mean times to move from one state to another and the mean length of stay in a particular state once it is entered. Where closed or absorbing states exist, the probability of absorption and the mean time to absorption can be calculated. A transient set of states is one in which each state may eventually be reached from every other state in the set, but which is left when the system enters a closed set of states or an absorbing state. A closed set differs from a transient set in that, once the system has entered any one of the states of the closed set, the set cannot be left. An absorbing state is one which, once entered, is not left, i.e., there is complete self-replacement. Mean passage time therefore represents the mean time required to pass through a specified successional state, and mean time to absorption is the mean time to reach a stable composition.

To construct Markov-type models, the following main items of information are needed: some classification that, to a reasonable degree, separates successional states into definable categories; data to determine the transfer probabilities, or rates at which states change from one category of this classification to another through time; and data describing the initial conditions at some particular time, usually following a well-documented perturbation.

As an example, consider the forest (woodland)-grassland interactions over long periods of time in natural landscapes. It is well known that continued human disturbance and the repeated occurrence of fire may turn natural forests into grasslands. The reverse may also occur, with grasslands getting transformed into forests under conducive environments. Here, forests and grasslands are identified as the two states the system can assume, with suitably accepted definitions, although in actual situations more than just two categories are possible.

Table 6.21 gives the data collected from 20 permanent sample plots on the condition of the vegetation in the plots, classified as forest (F) or grassland (G), at 4 repeated occasions with an interval of 5 years.

The estimated probabilities for the transitions between the two possible states over a period of 5 years are given in Table 6.22. The probabilities were estimated by counting the number of occurrences of a particular type of transition, say F-G, over a five-year period and dividing by the total number of transitions possible in the 20 plots over 20 years.

 

 

Table 6.21. Condition of vegetation in the sample plots at 4 occasions

Plot number    Occasion 1    Occasion 2    Occasion 3    Occasion 4
 1                 F             F             F             F
 2                 F             F             F             F
 3                 F             F             G             G
 4                 F             F             F             G
 5                 G             G             G             G
 6                 G             G             G             G
 7                 F             F             G             G
 8                 F             G             G             G
 9                 F             F             F             G
10                 G             G             F             F
11                 F             F             F             F
12                 G             G             F             F
13                 G             G             F             F
14                 F             F             G             G
15                 F             F             G             G
16                 F             F             F             F
17                 F             F             G             G
18                 F             F             F             F
19                 F             F             G             G
20                 F             F             F             F

Table 6.22. Transition probabilities for successional changes in a landscape (time step = 5 years)

                     Probability of transition to end-state
Starting state       Forest        Grassland
Forest                 0.7            0.3
Grassland              0.2            0.8

Thus plots which start as forest have a probability of 0.7 of remaining forest at the end of five years, and a probability of 0.3 of being converted to grassland. Areas which start as grassland have a probability of 0.8 of remaining in the same state and a probability of 0.2 of returning to forest vegetation. None of the states, therefore, is absorbing or closed; rather, there are transitions from forest to grassland and vice versa. Where there are no absorbing states, the Markov process is known as an ergodic chain, and we can explore the full implications of the matrix of transition probabilities by exploiting the basic properties of the Markovian model.
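The counting procedure behind such a matrix is easy to mechanise. The Python sketch below is an illustration only: it tallies the one-step transitions in the plot records of Table 6.21 and divides by the number of transitions starting from each state, so its output reflects these particular records rather than the rounded working values quoted in Table 6.22.

    # State of each plot (Table 6.21) at the four occasions, one string per plot
    records = [
        "FFFF", "FFFF", "FFGG", "FFFG", "GGGG", "GGGG", "FFGG", "FGGG",
        "FFFG", "GGFF", "FFFF", "GGFF", "GGFF", "FFGG", "FFGG", "FFFF",
        "FFGG", "FFFF", "FFGG", "FFFF",
    ]

    counts = {"F": {"F": 0, "G": 0}, "G": {"F": 0, "G": 0}}
    for rec in records:
        for a, b in zip(rec, rec[1:]):          # consecutive 5-year transitions
            counts[a][b] += 1

    for start, row in counts.items():           # normalise each row by its total
        total = sum(row.values())
        print(start, {end: round(c / total, 2) for end, c in row.items()})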

The values of Table 6.22 show the probability of transition from any one state to any other state after one time step (5 years). The transition probabilities after two time steps can be derived directly by multiplying the one-step transition matrix by itself, so that, in the simplest two-state case, the corresponding probabilities would be given by the matrix

| p11(2)  p12(2) |     | p11  p12 |     | p11  p12 |
| p21(2)  p22(2) |  =  | p21  p22 |  x  | p21  p22 |

In condensed form, we may write:

P(2) = PP

Similarly, the three-step transition probabilities may be written as:

| p11(3)  p12(3) |     | p11(2)  p12(2) |     | p11  p12 |
| p21(3)  p22(3) |  =  | p21(2)  p22(2) |  x  | p21  p22 |

or P(3) = P(2)P

In general, for the nth step, we may write:

P(n) = P(n-1)P (6.54)

For the matrix of Table 6.22, the transition probabilities after two time-steps are:

| 0.5500   0.4500 |
| 0.3000   0.7000 |

and after five time-steps are:

| 0.4188   0.5813 |
| 0.3875   0.6125 |

If a matrix of transition probabilities is successively powered until a state is reached at which each row of the matrix is the same as every other row, forming a fixed probability vector, the matrix is termed a regular transition matrix. The matrix gives the limit at which the probabilities of passing from one state to another are independent of the starting state, and the fixed probability vector t expresses the equilibrium proportions of the various states. For our example, the vector of equilibrium probabilities is:

t = [ 0.40   0.60 ]

If, therefore, the transition probabilities have been correctly estimated and remain stationary (implying that no major changes occur in the environmental conditions or in the management pattern for the particular region), the landscape will eventually reach a state of equilibrium in which approximately 40 per cent of the area is forest and approximately 60 per cent grassland.
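Repeated multiplication of the transition matrix, and its convergence to the fixed vector, can be verified with a few lines of Python (illustrative only):

    P = [[0.7, 0.3],
         [0.2, 0.8]]

    def matmul(a, b):
        # product of two 2 x 2 matrices
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    Pn = P
    for _ in range(19):                 # P(2), P(3), ..., P(20)
        Pn = matmul(Pn, P)

    print(Pn)                           # both rows approach the fixed vector [0.4, 0.6]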

Although in this example there are no absorbing states, certain further calculations would also allow us to estimate the average length of time for an area of grassland to turn to forest, or vice versa, under the conditions prevailing in the area, i.e., the mean first passage times. Alternatively, if we choose an area at random, we may ask for the average length of time we would need to wait for this area to become forest or grassland, i.e., the mean first passage times in equilibrium.
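For a two-state chain these quantities have a simple closed form: the general relation m(i,j) = 1 + sum over k not equal to j of p(i,k) m(k,j) reduces to m(F,G) = 1/p(F,G) and m(G,F) = 1/p(G,F). A minimal Python sketch (ours) under this assumption:

    p_FG, p_GF = 0.3, 0.2               # one-step transition probabilities (Table 6.22)
    step_years = 5                      # length of one time step in years

    # For two states, m_FG = 1 + p_FF * m_FG, so m_FG = 1 / p_FG (and similarly for m_GF)
    m_FG = 1 / p_FG                     # mean number of steps for forest to become grassland
    m_GF = 1 / p_GF                     # mean number of steps for grassland to become forest

    print(round(m_FG * step_years, 1), m_GF * step_years)   # about 16.7 and 25.0 years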

 

6.4. Wildlife biology

6.4.1. Estimation of animal abundance

Line transect sampling is a common method used for obtaining estimates of wildlife abundance. The line transect method has the following general setting. Assume that one has an area of known boundaries and size A, and the aim is to estimate the abundance of some biological population in the area. The use of line transect sampling requires that at least one line of travel be established in the area. The number of detected objects (si) is noted along with the perpendicular distances (xi) from the line to the detected objects. Alternatively, the sighting distance ri and sighting angle θi are recorded, from which xi can be arrived at using the formula x = r sin(θ). Let n be the sample size. The corresponding sample of potential data is indexed by (si, ri, θi), i = 1, ..., n. A graphical representation of line transect sampling is given in Figure 6.6.

Figure 6.6. Pictorial representation of line transect sampling

Four assumptions are critical to the achievement of reliable estimates of population abundance from line transect surveys, viz., (i) points directly on the line will never be missed; (ii) points are fixed at the initial sighting position, they do not move before being detected, and none are counted twice; (iii) distances and angles are measured exactly; and (iv) sightings are independent events.

An estimate of density is provided by the following equation:

D̂ = n f̂(0) / (2L)     (6.55)

where n = number of objects sighted

f̂(0) = estimate of the probability density function of the perpendicular distance values, evaluated at zero distance

L = total transect length

The quantity f(0) is estimated by assuming that a theoretical distribution, such as the half-normal or the negative exponential distribution, fits the observed frequency distribution of distance values well. Such distributions, in the context of line transect sampling, are called detection function models. The fit of these distributions can also be tested by generating the expected frequencies and performing a chi-square test of goodness of fit. Alternatively, the observed frequency distribution can be approximated by nonparametric functions like the Fourier series, and f(0) estimated from that. It is ideal that at least 40 independent sightings are made for obtaining a precise estimate of the density. Details of various detection function models useful in line transect sampling can be found in Buckland et al. (1993).

As an example, consider the following sample of 40 observations on the perpendicular distance (x), in metres, to herds of elephants, recorded from 10 transects each of 2 km length laid at randomly selected locations in a sanctuary.

32,56,85,12,56,58,59,45,75,58,56,89,54,85,75,25,15,45,78,15

32,56,85,12,56,58,59,45,75,58,56,89,54,85,75,25,15,45,78,15

Here n = 40 and L = 20 km = 20000 m. Assuming a half-normal detection function, for which f̂(0) = [2/(πσ̂²)]^(1/2) with σ̂² = Σxᵢ²/n, we obtain σ̂² = 3426.55 m² and f̂(0) = 0.01363 per metre, so that an estimate of the density of elephant herds in the sanctuary is obtained as

D̂ = (40)(0.01363)/[(2)(20000)] = 0.0000136 herds/m² = 13.63 herds/km²

In the case of the half-normal detection function, the relative standard error, or equivalently the coefficient of variation (CV), of the estimate of D is obtained by

CV(D̂) = (1.5/n)^(1/2)     (6.56)

= (1.5/40)^(1/2)

= 19.36%
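Assuming the half-normal estimators set out above (σ̂² = Σx²/n, f̂(0) = [2/(πσ̂²)]^(1/2), D̂ = n f̂(0)/(2L) and CV = (1.5/n)^(1/2)), the whole calculation takes only a few lines of Python; the sketch is illustrative only.

    import math

    x = [32, 56, 85, 12, 56, 58, 59, 45, 75, 58,
         56, 89, 54, 85, 75, 25, 15, 45, 78, 15] * 2   # the 40 perpendicular distances (m)
    n = len(x)
    L = 20000.0                                        # total transect length (m)

    sigma2 = sum(d * d for d in x) / n                 # half-normal scale estimate (m^2)
    f0 = math.sqrt(2 / (math.pi * sigma2))             # f(0), per metre
    D = n * f0 / (2 * L)                               # herds per square metre
    cv = math.sqrt(1.5 / n)

    print(round(D * 1e6, 2))                           # about 13.63 herds per km^2
    print(round(100 * cv, 2))                          # about 19.36 (per cent)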

Estimation of home range

Home range is the term given to the area in which an animal normally lives, regardless of whether or not the area is defended as a territory and without reference to the home ranges of other animals. Home range does not usually include areas through which migration or dispersion occurs. Locational data on one or several animals are the basis of home range calculations, and all home range statistics are derived from the manipulation of locational data over some unit of time. Although several methods for estimating home range have been reported, they generally fall under three categories, viz., those based on (i) polygons, (ii) the activity centre and (iii) nonparametric functions (Worton, 1987), each with advantages and disadvantages. A method based on the activity centre is described here for illustrative purposes.

If x and y are the independent co-ordinates of each location and n equals the sample size, then the point (x̄, ȳ) is taken as the activity centre, where

x̄ = Σ xᵢ / n     and     ȳ = Σ yᵢ / n     (6.57)

Calculation of an activity centre simplifies locational data by reducing them to a single point. This measure may be useful in separating the ranges of individuals whose locational data points overlap considerably.

One of the important measures of home range proposed is based on a bivariate elliptical model. In order to estimate the home range through this approach, certain basic measures of dispersion about the activity centre, such as the variance and covariance, may be computed first,

s²x = Σ(xᵢ - x̄)² / (n - 1)     s²y = Σ(yᵢ - ȳ)² / (n - 1)     cov(x,y) = Σ(xᵢ - x̄)(yᵢ - ȳ) / (n - 1)     (6.58)

and also the standard deviations, sx and sy. These basic statistics may be used to derive other statistics such as the eigenvalues, also known as characteristic or latent roots, of the 2 x 2 variance-covariance matrix. The equations for the eigenvalues are as follows:

λ1 = [(s²x + s²y) + √((s²x + s²y)² - 4(s²x s²y - cov²(x,y)))] / 2     (6.59)

λ2 = [(s²x + s²y) - √((s²x + s²y)² - 4(s²x s²y - cov²(x,y)))] / 2     (6.60)

These values provide a measure of the intrinsic variability of the scatter of locations along two orthogonal (perpendicular and independent) axes passing through the activity centre.

Although the orientation of the new axes cannot be inferred directly from the eigenvalues, the slopes of these axes may be determined by

b1 (the slope of the principal [longer] axis) = (λ1 - s²x)/cov(x,y)     (6.61)

b2 (the slope of the minor [shorter] axis) = (λ2 - s²x)/cov(x,y)     (6.62)

The y-intercepts, together with the slopes of the axes, complete the calculations necessary to draw the axes of variability. The equations

y = (ȳ - b1 x̄) + b1 x     and     y = (ȳ - b2 x̄) + b2 x     (6.63)

describe respectively the principal and minor axes of variability.

Consider a set of locational data in which the scatter of points is oriented parallel to one axis of the grid. Then the standard deviations of the x and y co-ordinates (sx and sy) are proportional to the lengths of the principal and minor (or semi-principal and semi-minor) axes of an ellipse drawn to encompass these points. By employing the formula for the area of an ellipse, Ae = π sx sy, we can obtain an estimate of home range size. For the rest of the discussion here, the ellipse with axes of length 2sx and 2sy will be called the standard ellipse. If the principal and minor axes of the ellipse are equal, the figure they represent is a circle and the formula becomes Ac = πr², where r = sx = sy.

One problem immediately apparent with this measure is that the calculated axes of natural locational data are seldom perfectly aligned with the arbitrarily determined axes of a grid. Hence the values sx and sy upon which the area of the ellipse depends, may be affected by the orientation and shape of the ellipse, a problem not encountered with circular home range models. Two methods are available to calculate values of sx and sy corrected for the orientation (covariance). In the first method, each set of co-ordinates is transformed as follows before computing the area of the ellipse.

x′ = x cos θ + y sin θ     (6.64)

and

y′ = y cos θ - x sin θ     (6.65)

where θ = arctan(-b) and b is the slope of the major axis of the ellipse.

A second, far simpler method of determining sx and sy corrected for the orientation of the ellipse uses the eigenvalues of the variance-covariance matrix derived from the co-ordinates of the observations. Because eigenvalues are analogous to variances, their square roots yield values equivalent to the standard deviations of the transformed locational data (i.e., sx′ = √λ1 and sy′ = √λ2). Although this second procedure is simpler, the trigonometric transformations of individual data points are also useful in ways which will be discussed later.

Another problem concerning the standard ellipse as a measure of home range is that the variances and covariance used in its calculation are estimates of parametric values. As such, they are affected by sample size. If we assume that the data conform to a bivariate normal distribution, incorporation of the F statistic in our calculation of the ellipse allows some compensation for sample size. The formula,

A(1-α) = π √(λ1 λ2) [2(n - 1)/(n - 2)] F(α; 2, n - 2)     (6.66)

can be used to adjust for the sample size used to determine what has now become a [(1-α)100]% confidence ellipse. This measure is supposed to provide a reliable estimate of home range size when locational data follow a bivariate normal distribution. Prior to the incorporation of the F statistic, the calculations presented could be applied to any symmetrical, unimodal scatter of locational data. White and Garrott (1990) have indicated the additional calculations required to draw the [(1-α)100]% confidence ellipse on paper.

The application of a general home range model permits inferences concerning an animal's relative familiarity with any point within its home range. This same information can be more accurately determined by simple observation. However, such data are extremely expensive in terms of time, and it is difficult to make quantitative comparisons between individuals or between studies. Regarding the concept of an activity centre, Hayne (1949) states, "There is a certain temptation to identify the centre of activity with the home range site of an animal. This cannot be done, since this point has not necessarily any biological significance apart from being an average of points of capture". In addition to the activity centre problem just mentioned, there may be difficulties due to inherent departures from normality of locational data. Skewness (asymmetry of the home range) results in the activity centre actually being closer to one arc of the confidence ellipse than predicted from the model, thereby overestimating the home range size (the [(1-α)100]% confidence ellipse). Kurtosis (peakedness) may increase or decrease estimates of home range size. When the data are platykurtic, the home range size will be underestimated; the converse is true of leptokurtic data. The trigonometric transformation of bivariate data helps solve this problem by yielding uncorrelated distributions of the x and y co-ordinates. However, in order to check whether the assumption of bivariate normality is satisfied by the data, one may use methods described by White and Garrott (1990), a description of which is omitted here to avoid complexity in the discussion.

Sample size may have an important effect on the reliability of the statistics presented here. It is rather obvious that small sample sizes (i.e., n < 20) could seriously bias the measures discussed. A multitude of factors not considered here may also influence the results in ways not yet determined; species and individual differences, social behaviour, food sources, and heterogeneity of habitat are some of these.

The steps involved in the computation of home range are described below with simulated data from a bivariate normal distribution with μx = μy = 10, σx = σy = 3, and cov(x,y) = 0, taken from White and Garrott (1990). The data are given in Table 6.23.

Table 6.23. Simulated data from a bivariate normal distribution with μx = μy = 10, σx = σy = 3, and cov(x,y) = 0.

Observation no.    x (m)       y (m)      Observation no.    x (m)       y (m)
 1                10.6284      8.7061      26                16.9375     11.0807
 2                11.5821     10.2494      27                 9.8753     10.9715
 3                15.9756     10.0359      28                13.2040     11.0077
 4                10.0038     10.8169      29                 6.1340      7.6522
 5                11.3874     10.1993      30                 7.1120     12.0681
 6                11.2546     12.7176      31                 8.8229     13.2519
 7                16.2976      9.1149      32                 4.7925     12.6987
 8                18.3951      9.3318      33                15.0032     10.2604
 9                12.3938      8.8212      34                11.9726     10.5340
10                 8.6500      8.4404      35                 9.8157     10.1214
11                12.0992      6.1831      36                 6.7730     10.8152
12                 5.7292     10.9079      37                11.0163     11.3384
13                 5.4973     15.1300      38                 9.2915      8.6962
14                 7.8972     10.4456      39                 4.4533     10.1955
15                12.4883     11.8111      40                14.1811      8.4525
16                10.0896     11.4690      41                 8.5240      9.9342
17                 8.4350     10.4925      42                 9.3765      6.7882
18                13.2552      8.7246      43                10.8769      9.0810
19                13.8514      9.9629      44                12.4894     11.4518
20                10.8396     10.6994      45                 8.6165     10.2106
21                 7.8637      9.4293      46                 7.1520      9.8179
22                 6.8118     12.4956      47                 5.5695     11.5134
23                11.6917     11.5600      48                12.8300      9.6083
24                 3.5964      9.0637      49                 4.4900     10.5646
25                10.7846     10.5355      50                10.0929     11.8786

Step 1. Compute the means, variances and covariance.

x̄ = 10.14     ȳ = 10.35

s²x = 11.78     s²y = 2.57     cov(x,y) = -1.22

sx = 3.43     sy = 1.60

Step 2. Calculate the eigenvalues and the slopes of the axes.

λ1 = 11.6434     λ2 = 2.7076

Step 3. Compute the sx′ and sy′ values.

sx′ = √λ1 = √11.6434 = 3.4122

sy′ = √λ2 = √2.7076 = 1.6455

Step 4. Calculate the home range based on the F statistic at (1 - α) = 0.95.

A(0.95) = π √(λ1 λ2) [2(n - 1)/(n - 2)] F(0.05; 2, 48)

= 114.8118 m² = 0.0115 ha
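Steps 2 to 4 can be scripted once the summary statistics of Step 1 are in hand. The Python sketch below is illustrative: it assumes the ellipse-area expression of Equation (6.66) and a tabled value F(0.05; 2, 48) of about 3.19, and because its inputs are the rounded summary values of Step 1 its output will differ slightly from the figures quoted above.

    import math

    n = 50
    sx2, sy2, cov = 11.78, 2.57, -1.22        # variances and covariance from Step 1

    # Eigenvalues of the 2 x 2 variance-covariance matrix (Equations 6.59 and 6.60)
    half_trace = (sx2 + sy2) / 2
    root = math.sqrt(((sx2 - sy2) / 2) ** 2 + cov ** 2)
    lam1, lam2 = half_trace + root, half_trace - root

    s1, s2 = math.sqrt(lam1), math.sqrt(lam2) # standard deviations along the ellipse axes

    F = 3.19                                  # approximate F(0.05; 2, 48) from a table
    area = math.pi * s1 * s2 * 2 * (n - 1) / (n - 2) * F   # 95 % ellipse area, square metres
    print(round(lam1, 2), round(lam2, 2), round(area, 1))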

 

 
