# 1. INTRODUCTION

1.1 The decision table
1.2 Elements of a decision analysis
1.3 Review of basic tools

There are two main reasons for conducting quantitative assessments of fish stocks. The first is to determine the current status and productivity of the resource (usually referred to as a stock assessment) and the second is to evaluate the consequences of alternative management actions (usually referred to as a decision analysis).

The results of a stock assessment are usually expressed in terms of the size of the population relative to some target or threshold level, the current and maximum level of sustainable harvest, the probability of the stock increasing under a specific level of harvest, and the values for quantities that summarise the life history characteristics of the population. Examples of the latter include MSYR, the ratio of the Maximum Sustainable Yield (MSY) to the biomass at which MSY is achieved (BMSY), and the steepness (h) of the stock-recruitment relationship (Figs 1.1 and 1.2).

Figure 1.1: Surplus production as a function of biomass. The horizontal line indicates the Maximum Sustainable Yield (MSY). BMSY is the biomass at which MSY is achieved and B0 is the virgin biomass.

A decision analysis involves comparing the implications of different management actions when there are several alternative hypotheses regarding which model best describes the actual situation. It involves the following six steps:

1. identifying the alternative hypotheses about the population and fishery dynamics;

2. determining the relative weight of evidence in support of each alternative hypothesis (expressed as a relative probability);

3. specifying the alternative management actions;

4. specifying a set of performance statistics to measure the consequences of each management action;

5. calculating the values for each performance statistic for each combination of a hypothesis and a management action; and

6. presenting the results to the decision makers.

These six steps will be discussed in some detail in conjunction with an example in the following sections. However, in general, the most difficult step is the second - determining the relative weight of evidence. This is because there are often too few data to discriminate effectively among alternative hypotheses. The bulk of this manual deals with how Bayesian methods can be used to assign probabilities to hypotheses. The most important advantage of Bayesian methods is that they provide a framework for assigning probabilities to different hypotheses using information from observations of the situation in question (e.g. age-composition data, trends in catch rate) and from inferences based on other stocks/species.

Figure 1.2: Recruitment as a function of the size of the spawner stock (solid curve) and recruitment needed to keep the population at its current level in the absence of fishing (solid straight line). The definitions of B0, steepness h, and a (the slope of the stock-recruitment relationship at the origin) are indicated.

Walters and Hilborn (1976) proposed that Bayesian analysis could be used to evaluate management actions. However, the first major applications in the context of traditional stock assessment models were by Bergh and Butterworth (1987), who considered an age-structured model with uncertainty in several parameters, and by Sainsbury (1988), who considered six structurally different models. Since then, Bayesian methods have been applied to a broad range of stock assessment problems based on biomass dynamics, age-structured, length-structured, delay-difference, and stock-recruitment models (e.g. Collie and Walters, 1991; Thompson, 1992; Hilborn et al., 1994; McAllister et al., 1994; Walters and Ludwig, 1994; Walters and Punt, 1994; Raftery et al., 1995; Kinas, 1996; McAllister and Ianelli, 1997; Punt and Kennedy, 1997; McAllister and Kirkwood, 1998a, b; Smith and Punt, 1998; Meyer and Millar, 1999a, b; Patterson, 1999).

The material presented in this manual is restricted to that which can be implemented easily using a personal computer. Most of the examples are based on simple biomass dynamics (Hilborn and Walters, 1992) and stock-recruitment (Ricker, 1954) problems. However, it is straightforward (at least conceptually) to extend the examples to the types of situations that are more commonly encountered in practice. The examples and methods concentrate on conducting decision analyses rather than stock assessments. This is because the typical outputs of a stock assessment are calculated as part of a decision analysis (i.e. it is necessary to estimate current biomass to be able to predict future biomass) and because conducting only a stock assessment provides limited help to decision makers in terms of providing information on the likely consequences of different management actions.

The remainder of this Chapter deals in more detail with the six aspects of decision analysis listed above. Some numerical methods for conducting Bayesian analysis are outlined in Chapter 2 and examples using EXCEL are given in Chapter 3. Chapter 4 describes some methods for developing prior distributions while Chapter 5 summarises the advantages and disadvantages of using Bayesian methods to obtain relative probabilities. The manual should be read with the spreadsheets open. This is particularly necessary for this first introductory Chapter.

## 1.1 The decision table

In those cases in which there are only a few (usually less than 10) hypotheses, the results of a decision analysis can be represented in the form of a "decision table". The spreadsheet EX1A.XLS provides a simple example of the use of a decision table to summarise the results of a decision analysis. For this case, the model of the population dynamics is assumed to be known and given by:

(1.1) Bt+1 = min(λBt − Ct, B0)

where

Bt is the biomass at the start of year t,
B0 is the virgin biomass,
Ct is the catch during year t, and
λ is the maximum per-capita rate of increase (the intrinsic growth rate).
It is assumed that the values for B0 and the initial biomass, B1, are known and equal to 10,000 tonnes and 5,000 tonnes respectively (cells B6 and B7), and that only the intrinsic growth rate is subject to uncertainty. For this example, the annual catch is determined using a "constant harvest rate" strategy, i.e. Ct = kBt, where k is the constant harvest rate. It is therefore assumed that the biomass can be estimated without error and that the harvest rate can be implemented perfectly. Suppose that the values for λ and k (cells B5 and B10 respectively) are known. To calculate, say, the biomass in x years, Ct is replaced by kBt in Equation (1.1) and the model is projected from year t=1 to year t=x (columns B and C, rows 26 onward).
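The projection logic described above can also be sketched outside the spreadsheet. The code below is a minimal illustration that assumes an exponential-growth-with-ceiling form for Equation (1.1), Bt+1 = min(λBt − Ct, B0); the spreadsheet's exact formulation may differ, and the function and parameter names are ours.

```python
def project_biomass(lam, k, b0=10_000.0, b1=5_000.0, years=300):
    """Project biomass forward under a constant-harvest-rate strategy C_t = k*B_t.

    Assumed dynamics (Equation 1.1): B_{t+1} = min(lam*B_t - C_t, B0);
    the spreadsheet's exact functional form may differ.
    """
    biomass, catches = [b1], []
    for _ in range(years - 1):
        b = biomass[-1]
        c = k * b                                  # constant harvest rate strategy
        biomass.append(max(min(lam * b - c, b0), 0.0))
        catches.append(c)
    return biomass, catches

biomass, catches = project_biomass(lam=1.2, k=0.1)
```

Note that under these dynamics the population persists only if the net growth factor λ − k exceeds one, which is why low-λ hypotheses combined with high harvest rates lead to extinction in the decision table.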

The decision table is constructed by performing the above calculation for several combinations of values for λ and k, and summarising the results in the form of a table. In this example, seven hypotheses regarding the value of the intrinsic growth rate are considered (1.05 to 1.35 in steps of 0.05 - cells F3 to L3) and it is assumed that the seven values are all equally likely (reflected by a relative probability of 1/7 - cells F2 to L2). The 12 alternative management actions (cells E4 to E15) reflect different choices for the annual harvest rate (0 to 0.275 in steps of 0.025).

It is possible to consider many different performance statistics for this problem (see Section 1.2.4). However, for simplicity, only one (the long-term yield - the average catch over the last 10 years of a 300 year forward projection; cell B13) is shown in EX1A.XLS. The choice of 300 years for the calculation of long-term yield is illustrative. The most appropriate choice of projection period depends on the dynamics of the population - in this example, a 200 year projection period would have led to near-identical results. The values of long-term catch for each combination of a hypothesis about λ and a choice of harvest rate are shown in cells F4 to L15. These values are calculated using the TABLE function (found in the DATA menu in EXCEL). The expected long-term catch for a given harvest rate is computed by multiplying the long-term catch for each hypothesis about λ for that harvest rate by the probability of each hypothesis (cells M4:M15).
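The full decision table can be assembled the same way. The sketch below assumes the same exponential-growth-with-ceiling dynamics as above (which may differ from the spreadsheet's exact model) and builds the 12 x 7 table of long-term yields together with the column of expected yields; all names are illustrative.

```python
def long_term_yield(lam, k, b0=10_000.0, b1=5_000.0, years=300, last=10):
    """Average catch over the last `last` years of a `years`-year projection.

    Assumed dynamics (Equation 1.1): B_{t+1} = min(lam*B_t - C_t, B0);
    the spreadsheet's exact form may differ.
    """
    b, catches = b1, []
    for _ in range(years):
        c = k * b                      # constant harvest rate: C_t = k * B_t
        catches.append(c)
        b = max(min(lam * b - c, b0), 0.0)
    return sum(catches[-last:]) / last

lams = [1.05 + 0.05 * i for i in range(7)]      # seven hypotheses about lambda
rates = [0.025 * i for i in range(12)]          # twelve harvest-rate actions
prior = 1.0 / len(lams)                         # equal prior probability (1/7)

# One row per action, one column per hypothesis (the spreadsheet's F4:L15)
table = {k: {lam: long_term_yield(lam, k) for lam in lams} for k in rates}
# Expected long-term yield for each action (the spreadsheet's M4:M15)
expected = {k: sum(prior * y for y in row.values()) for k, row in table.items()}
```

This mirrors what the TABLE function does in EXCEL: the nested comprehension plays the role of the two-way data table, and the expectation is the probability-weighted row sum.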

There are many ways in which this spreadsheet could be extended. For example, the performance statistic may be long-term biomass instead of long-term yield. This is implemented by changing the reference in cell E3 from B13 (long-term yield) to B14 (long-term biomass). It is straightforward to modify the spreadsheet to deal with a different population dynamics model. For example, spreadsheet EX1B.XLS is the same as spreadsheet EX1A.XLS except that the population dynamics model is assumed to be the Schaefer biomass dynamics model (Schaefer, 1954; 1957).

## 1.2 Elements of a decision analysis

1.2.1 Identifying the alternative hypotheses
1.2.2 Determining the weight of evidence
1.2.3 Specifying the alternative management actions
1.2.4 Specifying the performance statistics
1.2.5 Calculating the values of the performance statistics
1.2.6 Presenting the results to the decision makers

### 1.2.1 Identifying the alternative hypotheses

The choice of hypotheses is usually a question of preference, judgment, resources, experience, and background. In principle, the hypotheses should consist of all plausible structural models combined with all values for the parameters of those models. However, it is clearly not possible to consider all plausible hypotheses because of the need to obtain results reasonably quickly. Any actual decision analysis will therefore ignore many hypotheses that are clearly plausible, implicitly asserting that they have little or no credibility relative to the models that are considered. Unfortunately, this exclusion of hypotheses will impact the results of the decision analysis, possibly substantially. For example, Adkison and Peterman (1996) illustrate that hypothesis choice within a Bayesian framework can have a major impact on the estimated escapement goals for salmon.

The most common approach is to select a single structural model and to consider only the uncertainty in its parameters (e.g. McAllister et al., 1994; Raftery et al., 1995; Punt and Kennedy, 1997). A more defensible approach is to consider a series of truly different structural models (e.g. Sainsbury, 1988; McAllister et al., 1999; Patterson, 1999). However, apart from being computationally more intensive, it is also very difficult to choose which (of the very many possible) models to consider. An issue related to that of model choice is that of how to determine which model parameters to consider uncertain (rather than be assumed to be known exactly based on auxiliary information).

As noted above, the hypotheses could represent different values for some parameter(s) of a model (e.g. is MSY 200 or 300 tonnes) or relate to a modelling assumption (e.g. does depensation occur at low population size or not). In many instances, however, different modelling assumptions can be represented through different values for some parameter. In such instances, each different modelling assumption therefore represents an alternative model, and all of the alternative models are "nested" within a more general model. For example, the issue of whether recruitment is independent of the spawner stock size or whether it depends on this size could be considered to represent two different modelling assumptions. However, it could just as easily be considered to represent two different hypotheses about the values for the parameters of a Beverton-Holt stock-recruitment relationship (e.g. Thompson, 1993).
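To make the nesting concrete, the Beverton-Holt relationship can be written in its steepness parameterisation, in which h is the fraction of virgin recruitment R0 produced when the spawner stock is at 20% of B0; as h approaches 1, recruitment becomes independent of stock size, so "recruitment is constant" is nested within the general model as a limiting parameter value. The small illustration below uses our own function and parameter names.

```python
def beverton_holt(s, h, r0=1.0, s0=1.0):
    """Beverton-Holt recruitment in the steepness parameterisation.

    h is defined as R(0.2*S0)/R0, the fraction of virgin recruitment
    produced at 20% of the virgin spawner stock size S0.
    """
    x = s / s0
    return 4.0 * h * r0 * x / ((1.0 - h) + (5.0 * h - 1.0) * x)

# h = 0.7: recruitment at 20% of S0 is 70% of virgin recruitment.
# h near 1: recruitment is nearly flat over most of the stock-size range.
moderate = beverton_holt(0.2, h=0.7)
nearly_flat = beverton_holt(0.1, h=0.999)
```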

Uncertainty needs to be considered when specifying hypotheses because alternative models of the system are often used to represent it. Francis and Shotton (1997) identify the following sources of uncertainty that can be taken into account when conducting a decision analysis:

a) Process uncertainty ("process error") arises from natural variability. The most common example of process uncertainty is variation in recruitment for environmental reasons.

b) Observation uncertainty arises through measurement and sampling error although deliberate mis-reporting (of catches for example) also constitutes a form of observation error.

c) Model uncertainty arises through a lack of understanding of the underlying dynamics of the system being considered.

d) Error structure uncertainty arises from the inability to correctly identify the sources of error when fitting models to data.

e) Implementation uncertainty reflects the implications of the inability to fully implement management actions. This source of uncertainty is increasingly being recognised as being very important in fisheries decision analysis (e.g. Rosenberg and Brault, 1993; Angel et al., 1994; Rice and Richards, 1996).

The above five sources of uncertainty should be considered explicitly when specifying alternative hypotheses. When one of the five sources of uncertainty is ignored, this should be fully documented and rationalised. We strongly recommend careful documentation of how the alternative hypotheses were selected. This is because the choice of hypotheses to consider in a decision analysis can have a large impact on the final outcomes but this cannot be determined from the results of the analysis.

### 1.2.2 Determining the weight of evidence

We now review some methods that have been used historically to assign probabilities to hypotheses, and then explore in detail how this is done in a Bayesian analysis. The simplest method is to select a single model, set its parameters to the values that fit the data "best" and ignore all other models and parameter values. An extension of this approach is to consider several models and assign them equal (Punt and Butterworth, 1991) or unequal but pre-specified (Sainsbury et al., 1997) probabilities. The values for the model parameters are those that fit the data best and no account is taken of parameter uncertainty. Bootstrap or Monte Carlo methods (Francis, 1992; Restrepo et al., 1992; Punt and Butterworth, 1993) calculate frequency distributions for the values of the parameters of pre-specified models. These distributions are then used as if they represented the probabilities of alternative hypotheses.

The Bayesian approach to fisheries stock assessment assigns "posterior" (see Section 1.3.3) probabilities to hypotheses based on formally using the data for the stock under consideration and "prior" information based on subjective judgements and information for other species. It involves the following steps, each of which is dealt with in detail in the rest of this manual:

a) Divide the information available for the assessment into "direct" and "indirect" information. "Direct" information relates directly to the stock in question (e.g. data on trends, etc.) and "indirect" information is based on inferences for other stocks and species or expert opinion (e.g. ideas about the rate of natural mortality based on comparisons with similar species).

b) Identify the range of structural models. These models must be capable of providing model-estimates for all the data to be used in the assessment. In practice, however, this step has been ignored in most applications to date, and all but a very few Bayesian assessments have been based on a single structural model.

c) Identify the parameters for each alternative model.

d) Specify "prior" probability distributions for each of the alternative hypotheses and their parameters using the "indirect" information. The prior distribution summarises the probability for each alternative hypothesis/parameter based on inferences for other stocks or species or from expert opinion.

e) Define the likelihood function for the "direct" information (see Section 1.3.1). This involves constructing likelihood functions for each of the data types and multiplying them together. It is sometimes convenient to use some of the "direct" information in the priors. For example, if there are data on the rate of natural mortality for the species under consideration that are normally distributed with mean 0.1 yr-1 and standard deviation 0.05 yr-1, and the prior based on other species is uniform from 0 to 1, it is best to combine the prior and the data before starting the assessment (using a Bayesian method) and then to use a normal prior for the rate of natural mortality in the actual decision analysis.

f) Find, for each model, the values for the parameters that maximise the likelihood function - this information is often needed to compute the posterior distribution. It is always useful when doing this to check that the same results are obtained irrespective of the starting values chosen for the numerical algorithm used.

g) Update the prior probability distributions using Bayes Theorem (see Section 1.3.3) to obtain the posterior probability distributions.

h) Compute the distributions for each of the performance statistics for each of the management actions.

In general, as much data as possible should be included in the analysis. Note, however, that the Bayesian approach (or any other approach for that matter) cannot overcome the issue of which of a variety of (possibly conflicting) data types should be included in an assessment. The Bayesian approach provides posterior probabilities for alternative hypotheses, not for the reliability of the data set. There are approaches to handling situations in which there are conflicting sources of information (e.g. an increasing catch rate series and declining survey indices of abundance). Either analyses should be conducted for each data source separately and the results for each presented to the decision makers (e.g. Richards, 1991; Schnute and Hilborn, 1993), or a more complex model should be postulated (such as one in which there are trends over time in catchability or survey bias) so that the data are no longer in conflict.
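As an illustration of combining a prior with "direct" data, as in step (e) above, the sketch below updates a uniform prior for the rate of natural mortality M on a discrete grid using a normal likelihood with mean 0.1 yr-1 and standard deviation 0.05 yr-1. The grid, function names, and numbers are ours, chosen only to mirror the example in step (e).

```python
import math

# Grid of hypotheses about natural mortality M (per year), excluding 0 and 1
m_grid = [i / 1000.0 for i in range(1, 1000)]
prior = [1.0 for _ in m_grid]                  # uniform prior on (0, 1)

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# Likelihood of the direct estimate M ~ N(0.1, 0.05) at each grid point
like = [normal_pdf(m, 0.1, 0.05) for m in m_grid]

# Bayes rule on the grid: posterior proportional to prior * likelihood
unnorm = [p * l for p, l in zip(prior, like)]
total = sum(unnorm)
post = [u / total for u in unnorm]

post_mean = sum(m * p for m, p in zip(m_grid, post))
```

The normalised `post` would then serve as the (informative) prior for M in the actual decision analysis, in place of the original uniform prior.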

### 1.2.3 Specifying the alternative management actions

Management actions are generally arrived at through discussion among managers, stakeholders and scientists, and are usually quite clear-cut (alternative series of future catches, exploitation rates, size limits, etc.). However, as management systems become more sophisticated, management actions may take the form of feedback-control "decision rules." A decision rule defines a management action as a function of the estimated current status of the stock and perhaps even of the uncertainty about its estimated status (Hilborn and Luedke, 1987; Sainsbury, 1988; Donovan, 1989; Butterworth and Bergh, 1993; Butterworth et al., 1997; Cochrane et al., 1998). Figure 1.3 shows an example of a decision rule in which the rate of fishing mortality (and hence TAC) changes as a function of the estimated depletion[1] of the stock.

Figure 1.3: Relationship between fishing mortality and depletion for a theoretical decision rule.
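A decision rule of the kind shown in Figure 1.3 can be written as a simple function of depletion. The sketch below is a generic "hockey-stick" rule with entirely hypothetical parameter values; it is not a reconstruction of the specific rule plotted in the figure.

```python
def control_rule_f(depletion, f_target=0.2, limit=0.2, threshold=0.4):
    """Illustrative hockey-stick decision rule: no fishing below the limit
    depletion, the full target F above the threshold depletion, and a
    linear ramp in between. All parameter values are hypothetical.
    """
    if depletion <= limit:
        return 0.0
    if depletion >= threshold:
        return f_target
    return f_target * (depletion - limit) / (threshold - limit)
```

In a projection, the rule would be evaluated each year against the *estimated* depletion, which is what makes decision rules harder to evaluate than fixed-catch or fixed-harvest-rate actions (see Section 1.2.5).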

### 1.2.4 Specifying the performance statistics

Fisheries models can produce many performance statistics such as average catch, variance of catch, average stock size, minimum stock size, probability of collapse, and probability of falling below some threshold level. The average catch provides an indication of the expected yield from the fishery and pertains to the objective of maximizing catches. However, it is often sensible to discount the average catch by some fraction of its standard deviation to reflect the impact of uncertainty (e.g. Quinn et al., 1990). The papers in Smith et al. (1993) and Table 2 of Francis and Shotton (1997) illustrate the diversity of performance statistics presented by scientists. In any specific case, the performance statistics should be chosen to quantify the management objectives. For example, decision analyses in New Zealand have to consider the probability of moving the resource towards BMSY because this is an objective of the fisheries legislation (Annala, 1993).

Decision analyses often attempt to quantify the "risk" to the resource associated with the alternative management actions and decision analysis has even been described as assessing the "probability of something undesirable happening". However, there is little agreement on how to define this probability quantitatively (e.g. Butterworth et al., 1997; Francis and Shotton, 1997). Most authors have chosen not to specify exactly what processes (such as depensation) could occur at low population size. Instead they have assessed risk in terms of the probability of dropping below some threshold level (usually expressed as some fraction of the virgin biomass). The threshold used most commonly when assessing "risk" is 20% of the average virgin level, B0 (e.g. Beddington and Cooke, 1983; Francis, 1992; Punt, 1995, 1997). However, other levels have been considered (10% B0 - Bergh and Butterworth, 1987; 25% B0 - Hall et al., 1988, Quinn et al., 1990; 54% B0 for baleen whales - Butterworth and Best, 1994). Thompson (1993) provides some theoretical basis for the 20% threshold. For cases in which B0 cannot be estimated (for example, because information about the historical catches is unavailable), it is common practice to define risk as the probability of dropping below the current or the historically lowest biomass (e.g. Punt and Walker, 1998).

### 1.2.5 Calculating the values of the performance statistics

The consequences of a management action, given a specific hypothesis, can be determined analytically for very simple models only (e.g. spreadsheet EX1A.XLS). For most fisheries problems these consequences must be computed by Monte Carlo simulation. In the example, the aim is to calculate the future biomass, Bt, given the time-series of future catches, Ct, dictated by the management actions and any environmental fluctuations. Spreadsheet EX2A.XLS extends spreadsheet EX1A.XLS to allow for variability in population biomass due to environmental fluctuations. The population dynamics model (Equation 1.1) is extended as follows:

(1.2) Bt+1 = min(λBt exp(σvεt) − Ct, B0)

where

σv measures the extent of variation in biomass due to environmental fluctuations (process error), and

εt is a random number from a normal distribution with mean 0 and standard deviation 1; εt ~ N(0;1²).

The performance statistic is now the expected long-term yield (where the expectation is taken with respect to averaging over the impact of random environmental variation). As an example, we calculated it by conducting 10 population projections for each combination of a hypothesis about λ and a harvest rate, calculating the average catch over years 291 - 300 (cells L24 to U24), and averaging the resultant 10 average catches (cell B14). The random numbers required for the population projections, εt, are generated using the "normal" option of the Random Number Generation Analysis Tool[2] and stored in cells B333 to K632. Most actual risk analyses are based on 100-1,000 projections, so 10 projections is clearly insufficient. However, it is straightforward to extend the spreadsheet to increase the number of projections.
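The replicated stochastic projection can be sketched as follows. The code assumes a particular form for Equation (1.2), the deterministic exponential-growth-with-ceiling update multiplied by exp(σvεt); the spreadsheet's exact formulation may differ, and names are illustrative.

```python
import math
import random

def stochastic_yield(lam, k, sigma_v, b0=10_000.0, b1=5_000.0,
                     years=300, last=10, rng=None):
    """One stochastic projection; returns the average catch over the last
    `last` years. Assumed dynamics (Equation 1.2):
    B_{t+1} = min(lam * B_t * exp(sigma_v * eps_t) - C_t, B0), eps_t ~ N(0, 1).
    """
    rng = rng or random.Random(1)
    b, catches = b1, []
    for _ in range(years):
        c = k * b                                  # constant harvest rate
        catches.append(c)
        growth = lam * b * math.exp(sigma_v * rng.gauss(0.0, 1.0))
        b = max(min(growth - c, b0), 0.0)
    return sum(catches[-last:]) / last

# Ten replicate projections for one (lambda, harvest rate) combination,
# then the expected long-term yield as the average over replicates
rng = random.Random(42)
replicates = [stochastic_yield(1.15, 0.1, 0.2, rng=rng) for _ in range(10)]
expected_yield = sum(replicates) / len(replicates)
```

Increasing the number of replicates is a one-character change here, which is the programmatic analogue of extending the spreadsheet beyond 10 projections.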

A much broader range of performance statistics can be considered if the calculations are based on models that allow for stochastic variation. For example, in addition to the three performance statistics in spreadsheet EX1A.XLS (average catch, average biomass, and minimum biomass), spreadsheet EX2A.XLS also includes the coefficient of variation in annual catch and the probability that the biomass drops below 10% of the virgin biomass (cells B15 and B18). The value for this probability (cell B18) is 0.1 for the choice λ=1.15 because one projection out of the ten for this value of λ reaches a biomass that is lower than 10% of the virgin biomass.

Projection of the population and hence calculation of performance statistics becomes more difficult when the management action for a given year is determined by a decision rule rather than being a fixed catch or a fixed harvest rate, and hence depends in some way on the state of the system in each projection year. For instance, many management policies now used for large-scale industrial fisheries are based on attempting to fix the exploitation rate (for example to F0.1). The catch limit for a given year is obtained by multiplying the estimate of exploitable stock size for that year by the desired exploitation rate. The assessment procedure must be modelled to evaluate this sort of decision rule (e.g. Punt, 1993; McAllister, 1995). This can be very complicated, and, in many cases involving stochastic variation, results in the need for some simplifications or short cuts. One short cut is to assume that the estimated biomass (B̂t) is distributed with some error about the true biomass (Bt). Spreadsheet EX2B.XLS implements the case in which account is taken of error in estimating abundance when calculating the catch limits. For the case in which management is based on a fixed harvest rate policy, the catch limit is calculated as Ct = kB̂t (cells L28:U328). For this example, B̂t = Bt exp(σaνt), where σa determines the extent of assessment error and νt is a random variate from N(0;1²) (stored in cells L333:U633). This equation can also be used to represent implementation error in the true harvest rate, so that the harvest rate becomes a random variable (see Section 3.1.2.1). The issue of how to allow for errors in future stock assessments and in implementing management actions more realistically is discussed further in Chapter 3.
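The assessment-error short cut can be sketched as a single function: the harvest rate is applied to a log-normally distributed estimate of the biomass rather than to the true biomass. Names and values below are illustrative, not taken from EX2B.XLS.

```python
import math
import random

def catch_limit(true_biomass, k, sigma_a, rng):
    """Catch limit when the harvest rate is applied to an *estimated* biomass.

    Assumes log-normal assessment error:
    B_hat = B * exp(sigma_a * nu), nu ~ N(0, 1), and C = k * B_hat.
    """
    b_hat = true_biomass * math.exp(sigma_a * rng.gauss(0.0, 1.0))
    return k * b_hat

rng = random.Random(7)
limits = [catch_limit(5_000.0, 0.1, 0.3, rng) for _ in range(2000)]
mean_limit = sum(limits) / len(limits)
```

One design point worth noting: because E[exp(σaνt)] = exp(σa²/2) > 1, the average catch limit under this error model exceeds kBt unless a bias correction is applied, which is one reason assessment and implementation error deserve explicit treatment rather than being assumed to "average out".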

### 1.2.6 Presenting the results to the decision makers

When dealing with discrete hypotheses or a small number of values for a single parameter, the decision table format is an effective means of presentation, although multiple tables are required if there is more than one performance statistic. If the decision makers formulate a true objective function (Hilborn and Walters, 1992), the results of the decision table can be compressed into the expected utility for each possible management action. However, an objective function is rarely formulated explicitly, and we believe that most fisheries management groups should instead discuss the trade-offs among the performance statistics. Participants in the decision process often have competing objectives, and the most that scientists can hope to do is to present the consequences of the management actions and let the decision process lead to decisions. In many cases, stock assessment scientists have some role as decision advisors and should, at the very least, make sure that the decision makers understand the consequences of the management actions.

When there is uncertainty in several parameters, the decision table format is too limiting because it is impossible to express all of the hypotheses as columns in a decision table. We have used several approaches to overcome this problem. The first is simply to present the expected values of the performance statistics for each management action (see cells N4: N15 of spreadsheets EX2A.XLS and EX2B.XLS). An alternative is to select a key uncertainty and aggregate the results for different hypotheses related to this uncertainty (see the worked example in Section 3.1). Irrespective of the means used to summarize the results, care needs to be taken to avoid losing valuable information in the aggregation process. For example, the expected catches in spreadsheet EX2B.XLS suggest that a harvest rate of 10% is appropriate because it leads to the highest expected long-term catch. However, before the decision makers agree on this harvest rate, they should be made aware that there is a risk to be incurred: it is expected that applying a 10% harvest rate will lead to population extinction if the most pessimistic hypothesis about the size of the intrinsic rate of growth is correct.

One way to simplify the set of options to be presented to the decision makers is for the latter to provide "minimum levels of performance". For example, the International Whaling Commission, in evaluating alternative decision rules for commercial whaling, agreed that only candidate decision rules that met certain pre-specified levels of performance in terms of recovery times for overexploited populations would be considered for possible adoption. Any decision rules that did not satisfy this criterion for any hypothesis about the population dynamics were automatically rejected, no matter how well they performed for the other hypotheses. Butterworth et al. (1996) question this approach to decision making because no account is taken of the relative probability of the various hypotheses. Another problem with this approach is that the "minimum levels of performance" need to be chosen carefully to avoid eliminating all of the management actions.

## 1.3 Review of basic tools

1.3.1 Likelihood
1.3.2 The prior distribution
1.3.3 Bayes rule

The concepts of likelihood, prior distribution, and Bayes rule are required for calculating the posterior probability of alternative hypotheses. We provide a very basic and brief overview here. More details on these tools can be found in Hilborn and Walters (1992) and Hilborn and Mangel (1997).

### 1.3.1 Likelihood

The likelihood function, L(D|θ), defines the probability that the observed data set (D) would have occurred had a given set of parameter values, θ, been true. For the case in which the data are assumed to be independently and identically normally distributed, the likelihood function is given by:

(1.3a) L(D|θ) = ∏i (2πσ²)^(−1/2) exp(−(di − d̂i)²/(2σ²))

where

di is the ith data point,
d̂i is the model-estimated value of di, and
σ is the standard deviation of the observation error.
The choice of likelihood function depends on the assumptions made in the model. For the case in which the data are assumed to be independently and identically log-normally distributed, the likelihood function (ignoring any bias-correction factors) is given by:

(1.3b) L(D|θ) = ∏i (2πσ²)^(−1/2) (1/di) exp(−(ln di − ln d̂i)²/(2σ²))

The normal likelihood function could be used if the data were survey estimates of abundance (for which sampling standard deviations are usually available) and the model was a biomass dynamics model, which assumed that the surveys provided absolute indices of the exploitable biomass (e.g. Hilborn et al., 1994; Punt and Hilborn, 1996). The log-normal likelihood is commonly used for catch rate-based indices of relative abundance. In this case, the model-estimate needs to include the constant of proportionality (catchability) that relates exploitable biomass to catch rate.

For the case in which the data are assumed to be independent and identically distributed Poisson random variables, as is often the case for tag-recapture data, the likelihood function is given by the following equation (Hilborn, 1990a):

(1.4) L(D|θ) = ∏i (d̂i)^di exp(−d̂i) / di!

where

di is the observed number of recaptures in time interval i, and
d̂i is the model-estimate of the number of recaptures in time interval i.
For most stock assessment problems, evaluation of L(D|θ) involves projecting a population dynamics model forward, using known catches, to predict stock biomasses and then calculating the likelihood for the projection. It is common practice to set the likelihood equal to zero if the population becomes extinct before the most recent year. This removes the possibility of concluding that the population is already extinct. Walters and Ludwig (1994), Hilborn and Walters (1992) and Punt and Hilborn (1996) provide further examples of the likelihood functions used typically for fisheries stock assessment problems.
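These likelihood functions are straightforward to evaluate in code. The sketch below computes the log of a log-normal likelihood of the form of Equation (1.3b) and of a Poisson likelihood of the form of Equation (1.4) for given observed and model-estimated series; working in log space avoids numerical underflow for long series. Function names are ours.

```python
import math

def lognormal_loglike(obs, pred, sigma):
    """Log-likelihood for independently, identically log-normally
    distributed data (ignoring bias-correction factors)."""
    ll = 0.0
    for d, d_hat in zip(obs, pred):
        resid = math.log(d) - math.log(d_hat)
        ll += -math.log(d * sigma * math.sqrt(2.0 * math.pi)) \
              - resid ** 2 / (2.0 * sigma ** 2)
    return ll

def poisson_loglike(obs, pred):
    """Log-likelihood for independent Poisson counts, e.g. tag recaptures:
    sum of d*ln(d_hat) - d_hat - ln(d!)."""
    return sum(d * math.log(d_hat) - d_hat - math.lgamma(d + 1)
               for d, d_hat in zip(obs, pred))
```

In a projection-based assessment, `pred` would be the model-estimated index or recapture series generated by projecting the dynamics forward under the known catches, and the total log-likelihood would be the sum over data types.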

### 1.3.2 The prior distribution

The prior distribution summarizes the prior probability (relative credibility) of each alternative hypothesis to be examined (be it a value for some parameter or the choice of a structural model). We refer to p(H) - the prior probability for hypothesis H - to make it clear that we are not referring to the probability of different values for model parameters (denoted p(θ)), but rather to the prior probability for the combination of a model and the values for its parameters.

The prior includes the information from all knowledge except the data used in the likelihood calculations of the stock assessment (Punt and Hilborn, 1997). This information can include expert opinion or analyses of data for similar stocks and species. The impact of the prior distribution on the outcomes from a decision analysis depends on how informative the actual ("direct") data for the stock in question are. For cases in which the data are informative, the outcomes should be virtually independent of the choice for the priors. However, for cases in which there are few data, the results may only reflect the (assumed) priors.

### 1.3.3 Bayes rule

The relative probability (referred to as the "posterior probability") for each alternative hypothesis is computed by combining the information in the prior distribution and that in the data using Bayes rule. Bayes rule is written as:

$$p(H_i|D) = \frac{L(D|H_i)\, p(H_i)}{\sum_j L(D|H_j)\, p(H_j)} \qquad (1.5)$$

where

$p(H_i|D)$ is the posterior probability of hypothesis $H_i$,

L(D|Hi) is the likelihood of the data set D given hypothesis Hi, and

$p(H_i)$ is the prior probability of hypothesis $H_i$. This form of Bayes rule applies for the case in which the alternative hypotheses are considered as a discrete set.

For the case in which each hypothesis consists of the values for two parameters (say the parameters r and K of the Schaefer model), Bayes rule can be written as:

$$p(\theta_{1,i}, \theta_{2,j}|D) = \frac{L(D|\theta_{1,i}, \theta_{2,j})\, p(\theta_{1,i})\, p(\theta_{2,j})}{\sum_k \sum_l L(D|\theta_{1,k}, \theta_{2,l})\, p(\theta_{1,k})\, p(\theta_{2,l})} \qquad (1.6)$$

where

$\theta_{m,i}$ is the $i$th value of the $m$th (1st or 2nd) parameter.
The term marginal posterior is used to refer to the posterior for one parameter where the impact of all other parameters is removed by summing over these parameters. For example, the following formula is used to find the marginal posterior for $\theta_1$ in the above example:

$$p(\theta_{1,i}|D) = \sum_j p(\theta_{1,i}, \theta_{2,j}|D) \qquad (1.7)$$

Extending Equations (1.5) and (1.6) to handle many parameters is straightforward (on paper but not always on a computer[3]). Bayes rule can also be expressed for the case in which the hypotheses are continuous rather than discrete, but this is rarely used in stock assessments. Posteriors for more than one parameter will be referred to as "joint" posteriors. The same approach as was used to find a marginal posterior for q1 can be used to find the joint posterior for two parameters in a problem where there are more than two parameters.
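On a discrete grid, Equations (1.6) and (1.7) amount to a few nested sums. The following sketch (function and variable names are illustrative, not from the manual) assumes, as in Equation (1.6), that the priors for the two parameters are independent:

```python
def posterior_grid(prior1, prior2, likelihood):
    """Normalised joint posterior for two discretised parameters (Equation 1.6).

    prior1[i]        -- prior probability of the ith value of parameter 1
    prior2[j]        -- prior probability of the jth value of parameter 2
    likelihood[j][i] -- L(D | theta_{1,i}, theta_{2,j})
    """
    unnorm = [[likelihood[j][i] * prior1[i] * prior2[j]
               for i in range(len(prior1))]
              for j in range(len(prior2))]
    total = sum(sum(row) for row in unnorm)  # denominator of Equation (1.6)
    return [[cell / total for cell in row] for row in unnorm]

def marginal_param1(joint):
    """Marginal posterior for parameter 1 (Equation 1.7): sum over parameter 2."""
    return [sum(row[i] for row in joint) for i in range(len(joint[0]))]
```

Extending the sketch to more parameters simply adds further loops over the grid, which is exactly why the computational cost grows so quickly with the number of parameters.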

Table 1.1 outlines a simple (but completely fictitious) worked example of the application of Equation (1.6) for the case in which there are two parameters. Tables 1.1(a) and 1.1(b) list the prior distributions for the two parameters while Table 1.1(c) lists the (fictitious) likelihood of the data given the parameters. Table 1.1(d) combines Tables 1.1(a)-(c) to determine the numerator of Equation (1.6). The denominator of Equation (1.6) is found by summing over all the elements of Table 1.1(d). The final outcome of Equation (1.6) is shown in Table 1.1(e).

Table 1.1: Calculation of posterior probability distributions for two parameters A and B.

(a) Prior distribution - parameter A, p(A)

| Value | 300 | 400 | 500 | 600 | 700 |
| --- | --- | --- | --- | --- | --- |
| Probability | 0.1 | 0.2 | 0.3 | 0.2 | 0.2 |

(b) Prior distribution - parameter B, p(B)

| Value | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
| --- | --- | --- | --- | --- | --- |
| Probability | 0.3 | 0.15 | 0.1 | 0.15 | 0.3 |

(c) The likelihood function, L(D|A,B)

| B \ A | 300 | 400 | 500 | 600 | 700 |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 28.427 | 4.363 | 0.636 | 0.000 | 0.384 |
| 0.2 | 14.115 | 2.250 | 0.176 | 0.091 | 0.715 |
| 0.3 | 7.996 | 1.172 | 0.021 | 0.259 | 1.025 |
| 0.4 | 4.941 | 0.609 | 0.004 | 0.440 | 1.291 |
| 0.5 | 3.256 | 0.312 | 0.045 | 0.604 | 1.509 |

(d) The unnormalised posterior distribution, L(D|A,B)p(A)p(B)

| B \ A | 300 | 400 | 500 | 600 | 700 |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 0.853 | 0.262 | 0.057 | 0.000 | 0.023 |
| 0.2 | 0.212 | 0.068 | 0.008 | 0.003 | 0.021 |
| 0.3 | 0.080 | 0.023 | 0.001 | 0.005 | 0.021 |
| 0.4 | 0.074 | 0.018 | 0.000 | 0.013 | 0.039 |
| 0.5 | 0.098 | 0.019 | 0.004 | 0.036 | 0.091 |

(e) The normalised posterior distribution

| B \ A | 300 | 400 | 500 | 600 | 700 |
| --- | --- | --- | --- | --- | --- |
| 0.1 | 0.421 | 0.129 | 0.028 | 0.000 | 0.011 |
| 0.2 | 0.104 | 0.033 | 0.004 | 0.001 | 0.011 |
| 0.3 | 0.039 | 0.012 | 0.000 | 0.003 | 0.010 |
| 0.4 | 0.037 | 0.009 | 0.000 | 0.007 | 0.019 |
| 0.5 | 0.048 | 0.009 | 0.002 | 0.018 | 0.045 |
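The calculations behind Table 1.1 can be reproduced directly. The sketch below (variable names are illustrative) recomputes Tables 1.1(d) and 1.1(e) from the priors in Tables 1.1(a)-(b) and the likelihoods in Table 1.1(c):

```python
# Priors from Tables 1.1(a) and 1.1(b)
p_A = {300: 0.1, 400: 0.2, 500: 0.3, 600: 0.2, 700: 0.2}
p_B = {0.1: 0.3, 0.2: 0.15, 0.3: 0.1, 0.4: 0.15, 0.5: 0.3}

# Likelihood L(D|A,B) from Table 1.1(c), indexed [B][A]
L = {
    0.1: {300: 28.427, 400: 4.363, 500: 0.636, 600: 0.000, 700: 0.384},
    0.2: {300: 14.115, 400: 2.250, 500: 0.176, 600: 0.091, 700: 0.715},
    0.3: {300: 7.996, 400: 1.172, 500: 0.021, 600: 0.259, 700: 1.025},
    0.4: {300: 4.941, 400: 0.609, 500: 0.004, 600: 0.440, 700: 1.291},
    0.5: {300: 3.256, 400: 0.312, 500: 0.045, 600: 0.604, 700: 1.509},
}

# Table 1.1(d): unnormalised posterior L(D|A,B) p(A) p(B)
unnorm = {b: {a: L[b][a] * p_A[a] * p_B[b] for a in p_A} for b in p_B}

# Denominator of Equation (1.6): sum over all cells of Table 1.1(d)
total = sum(unnorm[b][a] for b in p_B for a in p_A)

# Table 1.1(e): normalised posterior
post = {b: {a: unnorm[b][a] / total for a in p_A} for b in p_B}

print(round(post[0.1][300], 3))  # 0.421, as in Table 1.1(e)
```

The marginal posterior for either parameter (Equation 1.7) then follows by summing `post` over the rows or the columns.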

[1] "Depletion" is defined in this manual to be the ratio of the biomass in a given year to that in a virgin state.
[2] Requires that the "Analysis Toolpack" Add-in of Microsoft Excel is installed.
[3] In fact, most of the rest of this manual involves providing the techniques for doing this.