

Ray Hilborn

School of Fisheries, Box 357980, University of Washington

Seattle, Washington, USA 98195–7980


Randall M. Peterman

School of Resource and Environmental Management

Simon Fraser University

Burnaby, British Columbia, Canada V5A 1S6


Scientists and decision makers involved in fisheries management will always be faced with uncertainties and risks, yet decisions have to be made. We discuss seven sources of uncertainties and illustrate how these have affected the success or failure of past decisions in fisheries management. We then describe how scientists should incorporate information on uncertainties into the advice given to decision makers by using the formal techniques of decision analysis and statistical power analysis. Despite the limitations of quantitative techniques, these methods are the best way of informing decision makers about the implications of uncertainties in fisheries management, regardless of whether decisions are made in a risk-neutral or a risk-averse, precautionary context. In addition, we discuss the findings of cognitive psychologists on how best to communicate information about uncertainties to managers, user groups, and scientists. Finally, in situations where weak data create large uncertainties, institutional mechanisms that internalize feedback may create incentives for a longer-term viewpoint among harvesters.


The precautionary approach to fisheries management has emerged from several decades of experience with managing fish as well as other natural resources. Management of these resources is typically characterized by large uncertainties, and human activities in the face of such uncertainties have sometimes led to undesirable consequences such as depleted fish stocks or tropical forests that have failed to regenerate due to eroded soils. Obviously, if consequences of management actions were known exactly prior to implementation, there would be no need to take a cautious approach — an appropriate decision could be made without being cautious. However, such perfect knowledge is not possible. Because of the complexity of fisheries systems and their large variability, all forecasts of expected consequences are made with considerable uncertainty. Therefore, the role of scientists in the fisheries management process is to:

  1. provide decision makers with an analysis of the expected consequences of different management actions;

  2. provide them with analyses of the sensitivity of these consequences to various assumptions and input data;

  3. collect data to support these analyses; and

  4. advise decision makers on what data should be collected in the future to improve the understanding of the system so that advice will become more useful.

Often a by-product of carrying out these steps is to help decision makers clarify their objectives and formulate alternative management plans. In cases where management objectives can be quantified, a fifth role of scientists is to help identify which of the contemplated management actions is most likely to meet the objective.

In this paper we concentrate on the problem of providing an analysis of expected consequences of management actions and the robustness of the analysis to different assumptions. This is the most visible role of scientists in the fisheries management process and is where concerns about precaution are most likely to enter. Our other purposes are to review the sources of uncertainty in scientific advice, consider the factors that may contribute to undesirable outcomes of management actions, discuss briefly what can be done to reduce uncertainty, and describe methods for incorporating information about uncertainties into scientific advice to help improve the quality of management decisions.

In this paper we use the terms “caution” and “precaution” to mean the desire to reduce risks. We use “risk” to mean “expected loss,” which is how statistical decision theorists define it (Berger 1985): the weighted average loss, i.e., the sum, over all potential events, of the probability of each event occurring times the loss incurred if it occurs. This definition of risk differs from the more common usage, “the probability of some undesirable event occurring.”
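This definition of risk can be illustrated with a small numerical sketch; the event probabilities and losses below are purely hypothetical.

```python
# Risk as expected loss: the probability-weighted average of losses
# over all potential events. All numbers here are hypothetical.
outcomes = [
    # (probability of event, loss if it occurs)
    (0.70, 0.0),     # stock remains healthy: no loss
    (0.25, 100.0),   # moderate depletion: moderate loss
    (0.05, 1000.0),  # collapse: severe loss
]

expected_loss = sum(p * loss for p, loss in outcomes)
print(expected_loss)  # 75.0
```

Note that under this definition a low-probability catastrophe (the 5% collapse) can dominate the total risk, which is one reason precautionary weighting of severe outcomes matters.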


We identify seven major sources of uncertainty in fisheries stock assessments. These are uncertainty in:

  1. estimates of fish abundance or other measures of the state of the system,
  2. model structure,
  3. estimated model parameters,
  4. response of users to regulation,
  5. future environmental conditions,
  6. future social, political and economic conditions, and
  7. future management objectives.

Estimates of Abundance

One purpose of stock assessments is to estimate abundance. These assessments depend heavily on data (most commonly estimates of catch in weight and often length or age distribution), indices of abundance such as research surveys, estimates of the stock structure, and information about the basic biology of the stock. It is common practice to estimate the reliability of data sources using some measure of variance associated with the sampling scheme; in surveys, for instance, the sampling variability will determine the confidence limits on the survey result. Two of the most common methods for determining abundance, catch-per-unit-effort (CPUE) and virtual population analysis (VPA), are extremely fallible. CPUE is often strongly biased by technological change (increases in catchability with time). VPA rarely provides reliable estimates of current abundance and can be strongly affected by incorrect estimates of the natural mortality rate (Sims 1984, Lapointe et al. 1989), among other errors. Furthermore, many stock assessments have been seriously flawed because catch data were incomplete (biased downward due to under-reporting), the purported index of abundance did not reflect actual abundance, or the stock structure was different from that assumed (number of populations, age distribution, etc.).
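The CPUE bias can be sketched numerically. Under the common assumption CPUE = qN, a gradual increase in catchability q (e.g., from improving fishing technology) can hold the index nearly flat while true abundance N declines; all values below are hypothetical.

```python
# Under CPUE = q * N, rising catchability q can mask a decline in
# true abundance N. All values here are hypothetical.
N0, q0 = 1000.0, 0.01

for t in range(10):
    N = N0 * (0.95 ** t)   # true abundance falls 5% per year
    q = q0 * (1.05 ** t)   # catchability rises 5% per year
    cpue = q * N           # the observed index stays nearly flat
    print(t, round(N, 1), round(cpue, 3))
```

After nine years the index has dropped by only about 2% while true abundance has fallen by about 37%, so a manager watching CPUE alone would see little cause for concern.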

Uncertainty in Structure of the Model

Uncertainty in model structure is rarely dealt with explicitly in fisheries stock assessments despite the general recognition that models are not well specified from existing data (e.g., uncertainty about stock structure is often large). Most assessments are based on a single model and uncertainty reported to managers usually refers to uncertainty in the parameters of the model without mentioning how different results might have been had different models been used. Such situations can lead to overconfidence in a decision about an appropriate management action because managers are not told how robust that choice is to different models.

Furthermore, while it is generally recognized that competition and predation are common features of marine population dynamics, most assessment models are based on single-species population dynamics. One notable exception is the multi-species VPA approach used by ICES (International Council for the Exploration of the Sea) to determine natural mortality rates in stock assessments. Unfortunately, the difficulties of obtaining good data have led some scientists to conclude that this method is not very reliable.

Uncertainty in Parameters of the Model

Almost all stock assessments have some form of model at their core and these models have parameters that are estimated with some uncertainty from the data. The commonly used assessment procedures based on age-structured models have parameters for natural mortality rates, growth rates, age-specific selectivities, and stock-recruitment functions. Agencies compute and report the uncertainty in some of these parameters to various extents, although the methods for computing the uncertainty, and the extent and format for reporting this uncertainty, differ greatly between agencies and localities. The methods normally consider variance associated with the data, but do not usually quantitatively address the more serious problem of bias in the data and resulting estimates. This will result in an underestimate of the uncertainty in the parameters of the model. Whereas standard methods only consider variance about the individual data points, the method proposed by Schnute and Hilborn (1993) considers uncertainty about the appropriate variance of the entire data series. Even when uncertainty in parameters is considered, it is common practice to assume that all of these parameters are time invariant. However, some stock assessments allow for temporal changes in some of the parameters (Walters 1987).

Uncertainty in Future Environmental Conditions

It is widely accepted that environmental conditions frequently have a significant impact on fish stocks and therefore any projections of future conditions must make assumptions about future environmental conditions. The simplest assumption is that the environmental conditions will be constant at the historical average. Most commonly stock assessment projections allow for random variability about past average conditions, but in some cases, scientists consider systematic environmental change such as linear trends, periodic changes, or even jumps in conditions (Parma 1990). The probabilities associated with these scenarios are very difficult to estimate.

Uncertainty in Response of Humans to Management Regulations

Most fisheries management activities are directed at people; gear restrictions, fishing seasons, restricted areas, and quotas are all regulations on harvesters, not on fish. When scientists provide advice on the expected consequences of actions, they must make assumptions about how harvesters will respond to regulations. The simplest approach is to assume that regulations will not be violated. However, harvesters may change the temporal and spatial distribution of effort in unexpected ways, change how they use their gear, switch the species they target, etc. Changes in regulations may also lead to changes in the rate of bycatch of some species or size classes of fish, in discarding, and in non-reporting of catches. When these types of responses are not incorporated into stock assessments, the uncertainty associated with the forecasted outcomes of management actions will be underestimated. More realistic assessments allow for consideration of how well regulations will be obeyed and how the regulations will affect the fishing process (Rosenberg and Brault 1993; Gillis et al. 1995).

Uncertainty in the Future Economic and Social Situation

Equally uncertain is the future of the social and economic system in which the fishery is embedded. In the early 1980s, fisheries on both coasts of Canada were in crisis, prompting a Royal Commission on the west coast and a Task Force on the east coast. In both cases it was found that the crisis was mainly social and economic in origin, due to overcapitalization of the fishing fleet, competing user groups, and other non-biological problems, with no major stock collapses or other biological catastrophes (Hilborn 1985). Such problems commonly emerge in many fisheries, yet we know of no assessments that routinely make projections about these social and economic factors or about the key components that drive them, such as prices, interest rates, and fuel costs.

Uncertainty in Future Management Objectives

Finally, not only are the objectives of management often vaguely defined, but they may change with time. The objective of today may not be tomorrow's objective and we would like to avoid actions today that may adversely affect future objectives. For instance, managers are now trying to rebuild, at considerable expense, the Snake River sockeye salmon population, the first salmon stock placed on the endangered species list in the U.S. However, within the past few decades, the local management authority attempted to eradicate part of this population by poisoning some of its rearing lakes in order to promote a sport fishery on another species (D. Bevan, University of Washington, Seattle, USA, personal communication, 1995). Similarly, whereas many salmon fisheries used to be managed with a focus on yields, now there is more emphasis on maintaining genetic variation among subpopulations because it is known that variation adapts them to local environments (Taylor 1991).

Table 1 summarizes elements that are commonly either included or excluded from analysis of uncertainty in fisheries advice.

It is worth noting that scientific research has generally underestimated uncertainty, even in the relatively well-understood physical sciences. Henrion and Fischhoff (1986) and Freudenberg (1988) have examined the history of parameter estimates in several fields, including measurements such as the speed of light, and found that confidence intervals were frequently too narrow and that subsequent estimates often fell outside of previously published confidence intervals.

Table 1

Components of uncertainty that are either commonly or rarely considered in providing management advice
Source of uncertainty | Commonly considered | Rarely considered
Data inputs to assessments: catch, indices of abundance, stock structure, basic biology | Precision; variance as estimated from internal variability in data | Bias
Structure of the model | Single model; single-species models | Alternative model structures; multi-species models
Parameters of the model | Uncertainty due to data precision | Uncertainty due to bias in data or alternative models
Future environmental conditions | White noise around historical average | Periodic changes, linear trends, jumps in condition
Response of users to regulations | Not usually considered | Changes in behaviour
Future social or economic conditions | Not usually considered | Changes in prices
Future management objectives | Not usually considered | Objectives that are qualitatively different from current ones


While the purpose of this paper is to highlight uncertainty and its role in management advice, we need to put this into a broader context. There are at least three major causes of failures of fisheries management (defined as reductions of stocks to the point of economic inviability). First, many failures resulted not from uncertainty but rather from institutional inability to implement scientific recommendations. Excess exploitation rates on many North Sea stocks were identified in Beverton and Holt (1957) and yet have persisted, despite repeated scientific advice that lower exploitation rates would lead to higher yields. In another instance, the long decline of the U.S. northeast groundfish stocks came despite repeated warnings from scientific advisors that exploitation rates were too high (Overholtz et al. 1986).

It can be argued in these cases that the user groups and decision makers were uncertain that the benefits of lower exploitation predicted by the scientists would materialize, or at least that the current participants in the fishery would reap those forecast future benefits. Indeed, it is hard to believe that managers and users who genuinely expected to be better off under catch restrictions would still refuse them. However, it may be that the continued overexploitation or decline of these fisheries reflects game-theoretic incentives (no individual harvester gains by restraining effort unilaterally) or relatively high discount rates.

Second, some “failures” resulted from economic or environmental forces beyond the control of industry or management. The decline in price and high interest rates of the early 1980s drove many fishing firms to bankruptcy because of their previous capital investment decisions. If reliable forecasts of the economic factors were available, it is possible that many such problems could be avoided, or at least such decisions would be made with more complete knowledge of the risks. Environmental changes have also affected productivity of fish populations. For instance, the decline of the California sardine population was due in part to a change in the ocean, and in part to exploitation.

Third, failure to recognize uncertainty and error in stock assessments has unquestionably caused many failures in fisheries management. The recent rapid reduction in abundance of northern cod in Canada appears to have been due in part to errors in assessing the stock size in the 1980s and to predictions of large sustainable yields, which led to the development of an entirely new Canadian offshore trawl fishery (Parsons 1993, Finlayson 1994). In Peru, the dramatic reduction in the anchoveta fishery in the early 1970s was due in part to stock assessment advice that the annual sustainable yield was 7–10 million tonnes. In retrospect scientists recognize that pelagic fisheries such as the anchoveta in coastal upwelling zones are subject to large interannual fluctuations in abundance and survival rates. Thus, it is not appropriate to make management decisions based on analyses that only consider the single, best-fit relationship for such dynamic processes.

Many experienced stock assessment scientists could generate a long list of fisheries failures that were due in part to uncertainties and poor stock assessment advice. The following are common mechanisms that have contributed to these failures.

Mis-specification of regulations due to errors in estimation of abundance

Stocks may be overharvested due to overestimation of stock abundance. In the case of the Canadian northern cod, “…harvest rates in the 1980s greatly exceeded the targeted F0.1 level, largely because of overestimation of stock size … coupled with great uncertainty in abundance estimates derived from research surveys” (Hutchings and Myers 1994). In the early 1990s the assessments were revised and quotas were reduced, but not quickly enough, and a complete depletion of the spawning biomass followed. The errors in estimation of abundance in the 1980s left the spawning stock in the early 1990s much lower than desired, which set the stage for a variety of factors (increased discarding, targeting of young fish, and fishing outside the EEZ) to contribute to further reductions in abundance.

Mis-estimation of potential yield leading to over-development of capacity

In both the northern cod and Peruvian anchoveta fisheries the forecasts of sustainable yield were much higher than proved to be true. Excess industrial harvesting and processing capacity was built and when it became necessary to reduce catches, the economic influence of this capacity made it impossible for the regulatory agency to reduce catches as fast as was required. This led, in turn, to a more severe decline in stock abundance than would have occurred if the industrial capacity had not been as large. It is difficult, however, to determine the extent to which overoptimistic forecasts contributed to overcapacity in these cases because economic and social forces also often tend to increase fleet capacity with time.

Another related issue, which is more of a management problem than a scientific issue in stock assessment, is that overcapacity of the fishing fleet makes it more difficult for a regulatory agency to achieve its harvesting goal. For instance, the large fishing power of the eastern North Pacific halibut fleet forced a drastic shortening of the fishing season, from openings totalling about 150 days per year in 1970 to two 12-hour openings per year in recent years (International Pacific Halibut Commission 1987). In such “knife-edge” situations, an opening that is slightly too long may have devastating effects on the spawning population and future production. There is little margin for error, and therefore one component of taking a precautionary approach is to limit the fishing power in a specific area and time.

Mis-estimation of potential yield leading to continued overexploitation

From about 1950 to the mid-1980s the International Pacific Salmon Fisheries Commission (IPSFC) managed the harvest of Fraser River sockeye salmon in Canada. The IPSFC regulated the fisheries to allow target numbers of fish to spawn in the Fraser River. Some scientists had suggested that the escapements allowed by the IPSFC were too low and that allowing more fish to spawn would produce a significant improvement in total returns. Beginning in the 1980s, the escapements were roughly doubled and during that time the number of fish returning to the river also doubled. Some portion of the increase was due to improved oceanographic conditions but most was due to increased escapements, which suggests that the stock had been harvested at a higher rate than necessary to maximize yields from about 1950 to 1980 (Hilborn, unpublished data).

Overestimation of ability of fish population to withstand fishing pressure, especially in the face of environmental variability

Scientific analyses that fail to fully incorporate data on age structure may tend to overestimate the ability of a fish population to withstand fishing pressure, especially in the face of environmental variability. This is because fishing mortality generally shortens the expected life span of fish and thereby leads to a truncated age distribution, with a smaller proportion of the larger, more fecund individuals remaining than in an unfished population. This effect of fishing tends to decrease the effectiveness of the bet-hedging life history strategy that many long-lived species have evolved to ensure survival of their offspring in highly variable environments (Leaman 1991). By spreading their reproductive effort over many years, individual females are more likely to successfully produce large cohorts in such variable environments (Murphy 1968). Thus, after prolonged or intensive harvesting, such long-lived species will be more vulnerable to recruitment failures arising from natural environmental variability. For this reason, models of iteroparous species that do not explicitly include age- or size-specific reproductive rates (e.g., many surplus production models) will underestimate the risks of particular harvesting strategies. The seriousness of this omission depends on the normal life span of the species: the longer the life span, the worse the situation will be.

Failure of scientists to fully communicate the uncertainties of their analyses to harvesters and decision makers

While this is hard to document, most of us are probably aware of cases where scientists provided their “best estimates” of parameter values, stock abundance, or recommended TACs without stating uncertainties associated with these numbers, or at least not presenting those uncertainties in an easily understood form. In some cases, the presentation of only point estimates might have resulted from the decision makers wanting a straightforward answer or explicitly wanting to avoid providing an opening for harvesters to pressure for higher quotas. In any event, after many years of this, such advice with “best estimates” might have acquired more of an air of certainty than was justified, thereby leading to more aggressive management decisions than the population could withstand in some years.


There are many obvious ways to reduce uncertainty in advice provided to managers. One is to do sensitivity analyses with quantitative models to identify research priorities. By ranking needs for new information in this way, research funds can be used efficiently to reduce future uncertainty in stock assessments. Another is to continue to develop more sophisticated quantitative methods for estimating components of stock assessments from data sets. In addition, a less well recognized way to reduce uncertainty is to set up management actions as part of a rigorous experimental design, which will reduce uncertainty about the effectiveness of particular management actions (Walters and Hilborn 1976; Walters 1986). In the past, simultaneous changes in environmental conditions and one or more management actions have made it difficult to attribute an observed change in a fish population uniquely to some hypothesized cause, which has perpetuated uncertainty about the effectiveness of management actions. However, once managers recognize that uncertainty about their future choices of actions can be reduced by taking present actions in some experimentally designed manner, new opportunities emerge. For instance, Sainsbury's (1991) experimental management of a mixed-species trawl fishery has provided a clear example of the benefits of taking such an experimental approach. The alternative hypotheses about the mechanisms affecting the fish communities have now been narrowed down considerably as a result of the experiment, and appropriate management actions are now much clearer than they were before the experiment began (K. Sainsbury, CSIRO, Hobart, Tasmania, personal communication 1995). Other authors have also demonstrated through quantitative models that benefits can be expected from experimental management, in part because of the decrease in uncertainty (reviewed by Peterman and McAllister 1993).

However, regardless of how much research and experimental management there is, there will always be uncertainties in scientific analyses. The next section discusses how to deal with these inevitable uncertainties in a systematic, productive manner.


We recommend using decision analysis to deal with uncertainties in fisheries management. This is a comprehensive method that incorporates uncertainties explicitly into making appropriate choices of management actions (Keeney 1982). This method will lead to a more cautious or risk-reducing approach than ignoring uncertainties (where one uses only the best-fit parameters) or dealing with uncertainties in some arbitrary way because decision analysis expressly considers a variety of possible conditions of the stock, fishing powers of the fleet, or whatever other quantities are considered uncertain. To put our recommendations into context, first consider the various ways in which scientists can analyze data and make recommendations.

Levels of Analysis and Presentation

Scientists' advice for managers usually takes the form of predicted outcomes for alternative management actions. Depending upon what kinds of uncertainty are taken into account during the analysis, this advice can be presented in a variety of forms.

Level 1 - No uncertainty is considered in the analysis. The simplest form of management advice is a simple table or graph depicting the expected outcome as a function of the management action chosen. Yield isopleth diagrams are the classic example, although it is now more common practice to provide several indicators of performance such as average catch, average stock size, etc. In the simplest form, no uncertainty is admitted and the indicators represent deterministic projections of the model in use. To some managers this is the preferred form of presentation because explicit statements of uncertainty often provide an opening for aggressive harvesters to push for higher quotas. As well, some managers do not have a systematic method for treating complicated information.

Level 2 - The analysis includes stochastic outcomes for a single model with fixed parameters. The next level of complexity is to admit stochastic variation in future environmental conditions, but no uncertainty in parameter values or structure of the model. In this case the outputs such as annual harvests, average stock abundance, lowest stock abundance, etc. must capture not only the expected (or weighted average) values, but some measure of how variable the outputs would be over time. This variability is often provided as a frequency distribution of each indicator or error bars.
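A Level-2 analysis can be sketched as a Monte Carlo projection: a single model with fixed parameters, where only future environmental variability is treated as random. The logistic surplus-production model and all parameter values below are hypothetical illustrations, not an assessment of any real stock.

```python
import random

# Level-2 sketch: stochastic projection of a logistic surplus-production
# model with fixed, known parameters; lognormal process noise represents
# environmental variability. All parameter values are hypothetical.
r, K, harvest_rate = 0.4, 1000.0, 0.15
random.seed(1)

def project(n_years=20, sigma=0.3):
    B, catches = 500.0, []
    for _ in range(n_years):
        catch = harvest_rate * B
        B += r * B * (1 - B / K) - catch          # deterministic dynamics
        B = max(B * random.lognormvariate(0, sigma), 1.0)  # noise
        catches.append(catch)
    return sum(catches) / n_years, min(catches)

# Repeat the projection many times to build frequency distributions
# of the performance indicators, as described above.
runs = [project() for _ in range(1000)]
mean_catches = sorted(avg for avg, _ in runs)
print("median of mean annual catch:", round(mean_catches[500], 1))
print("5th percentile:", round(mean_catches[50], 1))
```

The spread between the median and the lower percentiles is the kind of variability measure that, at this level, accompanies the expected values reported to managers.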

Level 3 - The analysis includes stochastic outcomes for a model that considers various parameter values and/or structural forms of the model. Once we consider the possibility of alternative parameters or models, the advice can take the form of the classic decision table from stochastic decision theory (Table 2), where rows represent alternative management actions and columns represent alternative hypotheses about the parameter values (or possibly model structures). Each cell then represents the outcome if a certain management action is taken and if a certain parameter or model happens to be true. Within each cell, the stochastic nature of the outcomes may be presented as averages or distributions of outcomes. For each alternative indicator, a different table or entry in each cell of the master table will be needed. Two other items are usually included in such presentations: an assessment of the relative probability associated with alternative parameters or models, and an expected value (or weighted average outcome) of each management action, found by multiplying the outcome of each combination of action and parameter value by the probability associated with that parameter value. Such outputs integrate across the uncertain parameters or models.

Decision Analysis

The third level of analysis and presentation of results described above is an example of formal decision analysis, which has been used for decades in business (Raiffa 1968) but has only recently started being applied in natural resource management. Uncertainties are found not only in fisheries management but also in other fields that deal with highly variable and difficult-to-measure natural or human systems, and various techniques have evolved to support decision making under these circumstances, one of which is optimization. However, the complexities of fisheries management situations usually preclude application of formal optimization techniques (see Clark 1985 and 1990 for notable exceptions). A more practical method for fisheries management is decision analysis. This method was originally developed in business to cope with investment decisions being made by private firms in the context of a variable marketplace (Raiffa 1968). Decision analysis is a structured, formalized method that enables analysts to rank proposed actions by quantitatively taking into account the probabilities of uncertain events and the desirability of the potential outcomes (Keeney 1982; Howard 1988). The technique is designed to improve the quality of decision making. Although it cannot guarantee that the “correct” decision will be made each time, the extensive literature on applications of decision analysis shows that it will outperform other approaches to dealing with uncertainty (Raiffa 1968; Keeney 1982). Many decision analysis techniques require considerable time and resources, and would likely be implemented for higher-valued fisheries on an occasional, rather than an annual, basis.

Table 2

Key elements of a decision table
Alternative management actions | Hypothesis 1 (Probability of Hypoth. 1) | Hypothesis 2 (Probability of Hypoth. 2) | Hypothesis 3 (Probability of Hypoth. 3) | Expected value
Option A | Forecasted outcome A1 | Forecasted outcome A2 | Forecasted outcome A3 | Expected value of Option A
Option B | Forecasted outcome B1 | Forecasted outcome B2 | Forecasted outcome B3 | Expected value of Option B
Option C | Forecasted outcome C1 | Forecasted outcome C2 | Forecasted outcome C3 | Expected value of Option C
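The arithmetic behind the last column of such a table is straightforward. The following sketch, with hypothetical probabilities and outcomes, computes the expected value of each option as in Table 2.

```python
# Expected value of each option in a decision table: the sum over
# hypotheses of (probability of hypothesis) x (forecasted outcome).
# All probabilities and outcomes below are hypothetical.
probabilities = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

outcomes = {
    "Option A": {"H1": 80, "H2": 60, "H3": 20},
    "Option B": {"H1": 100, "H2": 40, "H3": 5},
    "Option C": {"H1": 60, "H2": 55, "H3": 45},
}

evs = {action: sum(probabilities[h] * row[h] for h in probabilities)
       for action, row in outcomes.items()}

for action, ev in evs.items():
    print(action, round(ev, 1))
# Option A 62.0, Option B 63.0, Option C 55.5
```

In this hypothetical table, Option B has the highest expected value but the worst outcome under Hypothesis 3, which is one motivation for incorporating risk aversion through utility functions rather than ranking on expected value alone.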

One purpose of decision analysis is to provide insight into some complex decision problem by breaking the complexity into its constituent parts. Those parts are then reassembled to determine the optimal management action; uncertainties are taken into account explicitly, rather than hidden. The components of a decision analysis are:

  1. a management objective that specifies criteria for ranking contemplated management actions;

  2. a set of alternative management actions to choose from;

  3. alternative states of nature or hypotheses about parameter values or processes;

  4. probabilities for each of those states of nature;

  5. a model (or models) to calculate the consequences of each combination of management action and state of nature;

  6. a decision tree or decision table to systematically lay out the components;

  7. a ranking of management actions after the analysis; and

  8. a sensitivity analysis to determine how robust the rank order of management actions is to various assumptions, parameter values, model structure, and management objectives.

Management Objectives

It is not the role of scientists to define management objectives. However, it is often useful for managers to provide clearly defined objectives or goals to scientists so that the scientific analysis can indicate to decision makers how different the recommended management actions might be for different objectives. For instance, when managers are considering various magnitudes of safety margins in setting quotas for harvesting fish, the management objective is extremely important in determining the optimal safety margin. For example, Frederick and Peterman (1995) showed that if the management objective is to maximize the expected long-term yield of the Atlantic menhaden stock off the east coast of North America, then the optimal safety margin for a constant harvest rate policy is about a 12% reduction from the deterministically optimal harvest rate (the one based on the best point estimates of all quantities in the analysis). However, if the management objective is to minimize the probability of annual harvests falling below some minimum level, then the optimal safety margin is about a 20% reduction.
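The dependence of the optimal safety margin on the stated objective can be illustrated with a toy model. The sketch below uses a hypothetical Schaefer surplus-production stock with an uncertain intrinsic growth rate (all numbers invented, not from the menhaden analysis) and compares the margin that maximizes expected yield with the margin that minimizes the chance of a low annual yield:

```python
K = 1000.0                       # hypothetical carrying capacity
r_values = [0.3, 0.5, 0.7]       # uncertain intrinsic growth rate
r_probs = [0.25, 0.50, 0.25]     # probabilities on those values
r_hat = 0.5                      # best point estimate of r
u_det = r_hat / 2.0              # deterministically optimal harvest rate

def equilibrium_yield(u, r):
    """Schaefer surplus-production equilibrium yield (clamped at zero)."""
    return max(0.0, u * K * (1.0 - u / r))

margins = [0.0, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]

def expected_yield(m):
    """Expected yield at safety margin m, averaged over uncertain r."""
    u = u_det * (1.0 - m)
    return sum(p * equilibrium_yield(u, r) for r, p in zip(r_values, r_probs))

def prob_low_yield(m, threshold=60.0):
    """Probability that equilibrium yield falls below the threshold."""
    u = u_det * (1.0 - m)
    return sum(p for r, p in zip(r_values, r_probs)
               if equilibrium_yield(u, r) < threshold)

# Objective 1: maximize expected long-term yield.
margin_max_yield = max(margins, key=expected_yield)
# Objective 2: minimize the chance of a low yield (smallest such margin).
margin_avoid_low = min(margins, key=lambda m: (prob_low_yield(m), m))
```

Under these made-up numbers the yield-maximizing margin is smaller than the margin chosen to avoid low harvests, mirroring the qualitative pattern described above.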

Similarly, the choice of the optimal management action is affected by the degree of risk aversion expressed by managers. However, managers must carefully define what they are risk averse to — is it low abundance of the fish stock or small commercial harvests? Avoidance of one of these would lead to a different optimal management action than avoidance of the other. In formal decision analysis, the standard way in which risk aversion is taken into account is through a curvilinear utility function, where the value placed on each additional unit of abundance, for instance, decreases with increasing abundance (Keeney and Raiffa 1976).
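The effect of a curvilinear utility function can be sketched as follows; the two actions and their payoffs are hypothetical, and log utility stands in for whatever concave utility function a manager might elicit:

```python
import math

probs = [0.5, 0.5]             # two equally likely states of nature
harvest_risky = [10.0, 90.0]   # "boom or bust" action (hypothetical)
harvest_safe = [45.0, 50.0]    # steadier action (hypothetical)

def expected(values, probs):
    """Risk-neutral criterion: probability-weighted average harvest."""
    return sum(p * v for p, v in zip(probs, values))

def expected_utility(values, probs, u=math.log):
    """Risk-averse criterion: concave (log) utility values each additional
    unit of harvest less as harvests get larger."""
    return sum(p * u(v) for p, v in zip(probs, values))
```

Here the risky action has the higher expected harvest (50 versus 47.5), yet the safe action has the higher expected log utility, so a risk-averse ranking reverses the risk-neutral one.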

Experience in ecological modelling over the last 25 years shows that it is not easy for resource managers to clearly state their objectives. There are many indicators of interest, and there are diverse groups of stakeholders that want to participate more actively in decision making. In this situation, it is incumbent upon scientific analysts to do a thorough sensitivity analysis to identify the range of objectives over which a given management option is preferred, and the range over which the optimal action differs.

Another important issue in situations where there is large uncertainty, like fisheries management, is that the risks estimated by experts differ from the risks perceived by others, including the public. Slovic (1987) noted that this commonly observed difference can be created by lack of control over exposure to the risk, potential for extreme or catastrophic outcomes, mistrust of experts, etc. The question for managers is, should they make decisions based on what the experts say the risks are or what the non-experts perceive the risks to be? This is further complicated because often managers give more weight to the views of the commercial fishing industry than some scientists believe is appropriate.

Management Options

Since one purpose of decision analysis is to help rank alternative management actions, considerable thought should be put into which options are reasonable and feasible. In the context of the precautionary approach, such options might be various magnitudes of safety margins by which the total allowable catch or other regulatory control would be reduced to allow for uncertainty. However, the optimal safety margin should be estimated for each specific situation rather than being set arbitrarily, for example at a quota 20% or 30% below the TAC that is otherwise thought to be “best”, because such arbitrary margins can generate suboptimal results (Frederick and Peterman 1995).

Identifying Alternative Parameters or Models

One of the key methodological problems in incorporating uncertainty into assessment advice through decision analysis is identification of alternative parameters or models and quantification of the probabilities of these alternatives.

If we include alternative parameter values or models in an analysis, we must first decide which models to consider and then how to measure the uncertainty associated with the parameters of each model. We must also consider how to assign probabilities to alternative models. When confronted by alternative models, the most common practice is to write a more general model so that each alternative model is a special case of the general model, controlled by a parameter (the Shepherd (1982) stock-recruitment model is a good example, where the Beverton-Holt and Ricker-like forms fall out as special cases of the more general model). The uncertainty in this parameter of the more general model then reflects the uncertainty about the alternative models. For instance, if there is uncertainty about whether there is depensation in the spawner-recruit relationship, the formal way to assign probabilities to these alternative hypotheses is to consider a model in which no depensation is a special case and then evaluate the uncertainty regarding the intensity of depensation through a Bayesian analysis.
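As a sketch of this nesting idea, the Shepherd model below reduces exactly to the Beverton-Holt form at c = 1 and behaves in a dome-shaped, Ricker-like way for c > 1 (parameter values are illustrative only):

```python
# Shepherd (1982) stock-recruitment model: R = a*S / (1 + (S/b)**c),
# where the shape parameter c controls which special case applies.

def shepherd(S, a, b, c):
    """Recruitment R from spawning stock S under the Shepherd model."""
    return a * S / (1.0 + (S / b) ** c)

def beverton_holt(S, a, b):
    """Beverton-Holt form; identical to shepherd(S, a, b, c=1)."""
    return a * S / (1.0 + S / b)

# Depensation can be nested the same way, e.g. R = a*S**d / (1 + (S/b)**c),
# where d = 1 is the no-depensation special case; uncertainty about d then
# carries the uncertainty about depensation (illustrative parameterization).
```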

Assigning Probabilities to Parameters or Models

There are three primary methods used to assign probability distributions to alternative parameter values. First, maximum likelihood is the most traditional method; it involves specifying a likelihood function for the data as a function of the parameters and then computing the likelihood of the data across all combinations of parameters. In models with a few free parameters to be estimated from data, maximum likelihood involves rather straightforward computation, but for complex models with several uncertain parameters, it is much more difficult to estimate the probability distribution across a high-dimensional space. Many stock assessment groups that use a maximum likelihood procedure now combine it with some form of the Bayesian analysis described below. Maximum likelihood theory can be used to determine traditional confidence intervals without invoking Bayes' theorem, but in order to calculate the probabilities of alternative models or parameters one must, strictly speaking, use Bayesian methods. However, many stock assessments either simply report likelihood-based confidence bounds or use relative likelihoods of alternative parameters as approximations of Bayesian posterior distributions.
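A minimal sketch of the grid-based likelihood calculation, using a deliberately simple normal-observation model and invented data:

```python
import math

data = [4.8, 5.2, 5.1, 4.7, 5.3]   # hypothetical survey indices
sigma = 0.3                        # assumed known observation s.d.

def log_likelihood(mu):
    """Log-likelihood of the data if the true mean index is mu."""
    return sum(-0.5 * ((y - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi)) for y in data)

grid = [4.0 + 0.05 * i for i in range(41)]        # candidate values of mu
loglik = {mu: log_likelihood(mu) for mu in grid}
mle = max(loglik, key=loglik.get)                 # maximum likelihood estimate

# Relative likelihoods, normalized to sum to 1 across the grid; often used
# as an approximation to a Bayesian posterior under a flat prior.
rel = {mu: math.exp(ll - loglik[mle]) for mu, ll in loglik.items()}
total = sum(rel.values())
weights = {mu: r / total for mu, r in rel.items()}
```

In a real assessment the grid would span several parameters at once, which is exactly where the high-dimensional difficulty noted above arises.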

Second, bootstrapping is a technique that is commonly used to represent uncertainty in model parameters and structure; it requires fewer and less rigorous assumptions about the underlying statistical processes that generated the data. In many cases bootstrapping has proven to be a simpler way of computing a probability distribution similar to that from the Bayesian method (Mohn 1993). However, bootstrapping makes no claim to represent the probability of alternative hypotheses, and results can differ depending on the assumptions made in the bootstrapping procedure (Smith et al. 1993).
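A minimal nonparametric bootstrap sketch, with invented observations standing in for a real survey index:

```python
import random
import statistics

random.seed(1)   # fixed seed so the sketch is reproducible
data = [4.8, 5.2, 5.1, 4.7, 5.3, 4.9, 5.6, 4.5]   # made-up observations

# Resample the data with replacement many times and recompute the quantity
# of interest (here, simply the mean) to build up its uncertainty distribution.
boot_means = []
for _ in range(2000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(statistics.mean(resample))

boot_means.sort()
lo, hi = boot_means[50], boot_means[1949]   # approximate 95% percentile interval
```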

Third, Bayesian estimation uses the basic laws of probability in the form of Bayes' theorem to compute the probability of alternative parameter values, given the data and assumptions about their statistical distributions. Bayesian methods have two impediments to successful implementation. First, they can be, and often are, very computationally intensive, with solutions often requiring dozens of hours of computer time. More importantly, Bayesian methods also require a specification of “prior knowledge” about parameter values, and Bayesian assessments are sometimes strongly affected by what appear to be minor assumptions about these prior values (Adkison and Peterman 1995; Butterworth and Punt 1995). At present both bootstrapping and Bayesian methods are used frequently and there is considerable and lively discussion among specialists over their relative merits.

Whichever method is used, the ultimate objective of maximum likelihood, bootstrap, or Bayesian methods is to assign relative degrees of belief to alternative parameters or models. However, Bayesian methods are the only way (in theory) of placing a probability on the alternative outcomes of each management action.
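The core of the Bayesian calculation can be sketched on a discrete set of hypotheses; the priors and likelihoods below are invented for illustration:

```python
# Bayes' theorem on a discrete set of hypotheses: posterior belief is
# proportional to prior belief times the likelihood of the observed data.

hypotheses = ["no depensation", "weak depensation", "strong depensation"]
prior = [0.5, 0.3, 0.2]            # degrees of belief before seeing the data
likelihood = [0.02, 0.05, 0.01]    # P(data | hypothesis), from some model

unnormalized = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]   # sums to 1 by construction
```

Even though "weak depensation" started with less prior belief than "no depensation", its higher likelihood gives it the largest posterior probability in this toy example.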

Model to Calculate Consequences

Another key element of the decision analysis is the model used to forecast the future, i.e., to calculate the consequences of each combination of management action and set of parameter values. These models can be the standard stochastic, age-structured simulation models of fisheries management or simpler forms. However, whatever type of model is used, it must produce indicators of the consequences that appear in the cells of the decision table (Table 2). Those indicators must relate directly to the management objective stated in the initial step of decision analysis.

The “loss function” is an essential characteristic of a model in the context of choosing the appropriate level of precaution. This function describes the losses (or decrease in benefits) that are expected for each level of precaution (e.g., % reduction in the harvest rate below the supposedly optimal value estimated deterministically from the best point estimates of all quantities). The loss function may be derived directly from data or generated indirectly by a complex model. A key general result from the decision analysis literature (Morgan and Henrion 1990) is that the asymmetry in the loss function can have a major influence on how different the optimal decision that takes uncertainty into account is from the deterministic case. For example, Frederick and Peterman (1995) found that in some marine fishes, the loss function appears to be relatively symmetric, in which case a deterministically optimal strategy will perform almost as well as the optimal strategy found by a complete decision analysis that includes uncertainties. However, this was not true in other cases where a depensatory recruitment process was included because this increased the possibility that there would be a large, long-term loss if the stock was overharvested (Frederick and Peterman 1995). In effect, this latter situation created an asymmetric loss function and in that case, large safety margins and a very precautionary approach were warranted (Frederick and Peterman 1995).

Decision Tree or Decision Table to Calculate Ranking of Management Actions

Analysts can combine the elements of a decision analysis discussed above through the step of calculating the weighted average outcomes for each action. That is, each outcome is weighted by the probability assigned to each alternative parameter value or model, as shown in Table 2. These expected values for each action then provide a ranking of the alternative management actions. In more complex situations, where there are several categories of uncertainties or a sequence of decisions, one must use a decision tree to lay out the calculations (Raiffa 1968). However, the principle of weighting each state of nature by its probability is the same as in a decision table.
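The weighting step can be sketched directly from the layout of Table 2; all probabilities and outcomes below are hypothetical placeholders:

```python
# Probabilities assigned to three alternative hypotheses (they sum to 1).
probs = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# Forecasted outcome (e.g., long-term yield in thousands of tonnes) for
# each management option under each hypothesis.
outcomes = {
    "Option A": {"H1": 100.0, "H2": 60.0, "H3": 20.0},
    "Option B": {"H1": 80.0, "H2": 70.0, "H3": 50.0},
    "Option C": {"H1": 60.0, "H2": 60.0, "H3": 60.0},
}

def expected_value(row, probs):
    """Weight each forecasted outcome by the probability of its hypothesis."""
    return sum(probs[h] * v for h, v in row.items())

# Rank the options by expected value, as in the right-hand column of Table 2.
ranking = sorted(outcomes, key=lambda o: expected_value(outcomes[o], probs),
                 reverse=True)
```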

Another value of using decision analysis is that it helps to circumvent the problem that arises from the burden of proof being on management agencies. For instance, a common approach to making decisions in fisheries management has been to continue with some “default” harvesting regime unless there was strong evidence that a change was needed. However, with decision analysis, there is no single default action; each alternative action is given equal a priori consideration. This forces managers to identify the best action among a range of actions and the question of “burden of proof” is circumvented.

Sensitivity Analysis

Assessment groups almost universally present some sensitivity analysis to the basic assumptions of their main assessment. For instance, expected consequences of management actions might be presented for changes in assumed natural mortality rate. Sensitivity analysis is a valuable tool for scientists to explore how robust their results are to assumptions. However, assuming that the results were sensitive to a parameter, how would a manager use such results of a sensitivity analysis unless the scientists assigned probabilities to each case? Thus, sensitivity analysis has two types of results. If it can be shown that the results are not sensitive to alternative assumptions (models, etc.), then such alternatives can be ignored. However, if the results are sensitive, then we see no choice except for the scientist to include uncertainty about the parameter (or model structure), assign probabilities to the alternatives, and then carry out the decision analysis described above again.

The critical question, of course, is whether the choice of the “best” policy changes with alternative assumptions. Even if the expected yield, population size, or other indicator variable is sensitive to an assumption, so long as the ranking of the alternative actions is not sensitive to the assumptions, then the managers need to be less concerned with the validity of the assumption.
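This ranking-focused sensitivity check can be sketched as follows, using a toy Schaefer-type equilibrium yield and an uncertain intrinsic growth rate r (all values invented):

```python
K = 1000.0   # hypothetical carrying capacity

def equilibrium_yield(u, r):
    """Schaefer surplus-production equilibrium yield for harvest rate u < r."""
    return u * K * (1.0 - u / r)

options = {"u=0.10": 0.10, "u=0.20": 0.20, "u=0.45": 0.45}

rankings = set()
for r in (0.5, 1.0):   # two alternative assumptions about r
    order = tuple(sorted(options,
                         key=lambda o: equilibrium_yield(options[o], r),
                         reverse=True))
    rankings.add(order)

# If the rank order differs across assumptions, the assumption matters and
# belongs inside the decision analysis with a probability attached to it.
sensitive = len(rankings) > 1
```

Because the rank order of harvest rates flips between the two assumed values of r here, this toy assumption is exactly the kind that should be carried into the decision analysis with a probability attached.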


One way to build confidence in the results of an analysis is to submit it to rigorous external peer review, as many agencies now routinely do. This generates constructive criticism that will lead to improvements in the analyses in the next iteration. In addition, there are several other aspects of building confidence in an analysis.

Validation and Invalidation

Models are the nearly universal tool for formulation of management advice in fisheries and there obviously is concern about the reliability of the models — users seek assurance that the models have been “validated”. However, some authors (Holling 1978) have argued that the term model validation is inappropriate and that instead the process of establishing degrees of belief should be known as model invalidation. In this view, one must explicitly consider a set of alternative models, which may differ only in the set of parameter values used or may differ structurally in the form of one or more hypothesized components. These alternative models are then compared with respect to various features such as descriptions of past observations, ability to forecast well in new situations not included in the input data, etc. The object is to specify the relative degree of belief in the different models. The greatest potential pitfall of “validation” is to assume that once a model has been validated, by whatever method, it is a true representation of nature and to exclude from consideration other alternatives (i.e., place a zero probability on the possibility that those other models exist). This can create a serious problem when scientists arbitrarily choose the range of possible models or parameter values and inadvertently make it too narrow. Adkison and Peterman (1995) describe such a case where Geiger and Koenigs (1991) did a Bayesian analysis to identify the appropriate escapement goal in Alaskan sockeye salmon. Geiger and Koenigs excluded from consideration the range of parameter values that would have been much more consistent with the field data than the range they did consider. This resulted in inappropriately high posterior probabilities on the set of alternative models that they evaluated and also led to unjustified recommendations for the escapement goal (Adkison and Peterman 1995).

The view of Holling (1978) also considers that all models are wrong to some degree. What is important is that a certain model can only be judged against other approaches to using the data. In other words, while any mathematical model may not be correct, managers must ask whether the use of a particular model is likely to be better than an intuitive analysis or “back-of-the-envelope” calculations. We believe that the answer is usually yes, if the model includes an appropriate level of detail relative to the data. Better yet, scientists should develop a range of alternative models for comparison. Once one accepts the stochastic nature of future events and the reality of uncertainty about model structure and parameters, model validation ceases to be an issue and the question is whether the alternative models and parameters considered represent an appropriately broad range and whether the probabilities of alternative models have been assigned using all current knowledge. If so, then the most relevant question for managers is whether the final recommended management action is affected by the range of models considered or the relative weighting put on each.

Limitations of Quantitative Analysis

Decision analysis accounts for uncertainties more comprehensively than most other approaches and it is therefore one of the best ways to identify the best management strategy. However, because of the incompleteness of ecological data, we will still tend to be overconfident in our results from a decision analysis. For instance, if we include a Bayesian analysis of some model and related data, the posterior probability density function will probably be narrower than it would be if more factors were admitted as being uncertain. In other words, the posterior probability density function will become flatter (broader) the more uncertain quantities there are in an analysis, and the optimal decision will be less clear. In the case where we are trying to determine the appropriate level of precaution or safety margin in a fishery, a particular analysis will likely only give the minimum size of that margin; if other uncertainties were included, the margin would probably be larger, as long as they did not introduce a bias. In other words, we may wish to act in an even more cautious manner than indicated by a decision analysis in order to reduce risks. As we gain more experience in admitting uncertainty, we will better understand how the final results depend upon admitting different types of uncertainty.

Hypothesis Testing and Statistical Power

Traditionally, scientists have built confidence in their analyses by applying statistical inference techniques to test some null hypothesis. The result is either to reject or to fail to reject the null hypothesis. Thus, it is important to review the often-neglected issue of statistical power even though, as we have noted, this is not as appropriate a way to deal with uncertainties as Bayesian decision analysis.

Table 3

Four possible outcomes for a statistical test of some null hypothesis, depending on the true state of nature. The probability for each outcome is given in parentheses.
                                   Statistical Decision
State of nature                    Do not reject null hypothesis   Reject null hypothesis
Null hypothesis actually true      Correct (1-α)                   Type I error (α)
Null hypothesis actually false     Type II error (β)               Correct (1-β) = statistical power

Reprinted from Peterman (1990)

Statistical power is the probability of correctly rejecting a null hypothesis (Table 3; Dixon and Massey 1983). For instance, if the null hypothesis is that there has been no decrease in recruitment over time, then it is feasible to calculate the probability of rejecting that null hypothesis (at the stated α level) under each conceivable real magnitude of effect, including no effect. One can calculate statistical power, given the α, sample size, and sampling variability, for each postulated magnitude of effect.
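Such a calculation can be sketched for a one-sided z-test under an assumed normal model (a stand-in for a real recruitment analysis, with purely illustrative numbers):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_quantile(p, lo=-10.0, hi=10.0):
    """Inverse normal CDF by bisection (adequate for this sketch)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def power(effect, sd, n, alpha=0.05):
    """P(reject H0 | a true decrease of size `effect`), for a one-sided
    z-test with sampling s.d. `sd` and sample size n."""
    z_crit = normal_quantile(1.0 - alpha)
    return 1.0 - normal_cdf(z_crit - effect * math.sqrt(n) / sd)
```

With effect = 0 the function returns α itself, and larger sampling variance or smaller samples drive the power down.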

Statistical power analysis is best carried out before some monitoring program or stock assessment method is implemented. This a priori power analysis can identify where the experimental design or data collection method should be improved in order to have an acceptably high probability of detecting the effects that managers are concerned about, such as large decreases in recruitment or abundance of the stock. However, most scientists do not calculate power of their methods of stock assessment. This is a serious omission because the large interannual variability in fish populations and the large sampling variance often lead to low power (Peterman 1990). These circumstances have contributed in the past to a high frequency of cases of type II error, where a real effect went unnoticed because the null hypothesis was not rejected and regulatory action was not taken when it should have been (Peterman 1990). This type of error can often be more costly than a type I error, which scientists usually focus on avoiding by setting α at a low value, usually 0.05. A type I error may involve incorrectly concluding that there is a decrease in abundance, for instance, resulting in reduction in fishing time. However, the analogous type II error would involve incorrectly concluding that there is not a decrease in abundance, resulting in no reduction in fishing time. If this lack of regulatory action leads to overexploitation of the stock, then the long-term costs of the type II error may be much larger than the costs of the type I error.

Statistical power analysis should also be done after a statistical inference fails to reject a null hypothesis in order to find how large an effect would have to have been present in order to have an acceptably high power (e.g., 0.8). In many fisheries situations, this “detectable effect size” is unacceptably large in biological or economic terms because of the large variance or small sample size (Vaughan and Van Winkle 1982). Thus, attempts to rely on statistical inference tests without noting the power of those tests to detect important effect sizes may lead scientists and managers to not take regulatory action when they should.

Scientists can do a simple decision analysis to weight these types of outcomes and errors by their probabilities of occurrence in order to choose the appropriate action (Peterman 1990). Table 3 summarizes the probabilities of the four different potential outcomes of a statistical test of some H0. There is a predefined probability, α, of making a type I error and a probability, β, of making a type II error, as defined by the experimental design (sample size, sample variance, true effect size, and α). The complement of β (1-β) is defined as statistical power, or the probability of correctly concluding that some effect exists.

As noted by Peterman (1990), most fisheries scientists and decision makers do not realize that making a decision about some management action as a result of a statistical analysis that fails to reject some null hypothesis automatically implies an assumption about the ratio of costs of type I and type II errors. That assumption may be quite different from the real costs of those errors. In particular, where β > α, they assume implicitly that the costs of type I errors exceed those of type II errors if they take action as if H0 were true. For example, suppose that data from a harvested fish stock did not reject the H0 of no decrease in abundance over time at α = 0.05 and that a β = 0.4 was calculated by statistical power analysis using the sample size, sample variance, and the best estimate of the effect size from current data. Suppose further that decision makers wanted to take the action with the lowest expected cost of an error (expected cost = probability of an event × cost if the event occurs). If the data analysis failed to reject H0 and if they took action to avoid making a type I error (assuming the H0 to be true and allowing fishing to continue at the current intensity), then they would implicitly be assuming that the expected cost of a type II error is less than the expected cost of a type I error. This is demonstrated by solving for the ratio of costs of type II to type I errors, CII/CI, given α and β and assuming that action was taken as if H0 were true: αCI > βCII, or α/β > CII/CI. Since α = 0.05 and β = 0.4 here, 0.125 > CII/CI, or CI > 8CII. In other words, by taking the action that they did, the managers implicitly assumed that the cost of making a type I error was more than 8 times the cost of a type II error (Peterman 1990). But as noted above, the reverse is more likely: type II errors are often more costly than type I errors in fisheries management.
Such implied cost ratios of acting on results of statistical tests are rarely reported by scientists, let alone considered by decision makers. If they were, managers might make different decisions.
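The implied cost-ratio arithmetic above can be written out directly:

```python
# Acting as if H0 were true after a non-significant test implicitly assumes
# alpha * C_I > beta * C_II, i.e. C_II / C_I < alpha / beta.

def implied_cost_ratio_bound(alpha, beta):
    """Upper bound on C_II/C_I implicitly assumed by acting as if H0 is true."""
    return alpha / beta

bound = implied_cost_ratio_bound(alpha=0.05, beta=0.4)
# The type I error cost is implicitly assumed to be at least this many
# times the type II error cost:
implied_type1_multiplier = 1.0 / bound
```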

Thus, statistical power analysis can provide useful information about uncertainties that is relevant to scientists and decision makers. It is a different way of characterizing uncertainty than the components of decision analysis discussed earlier. Statistical power analysis may be particularly appropriate in cases where there is little expertise available for applying the more advanced techniques of Bayesian analysis. We should also note that some scientists prefer to state confidence intervals on parameter estimates to give an indication of their uncertainty, rather than testing hypotheses, but this is not the same as providing a probability or relative degree of belief in each of the alternative values of the parameter (Sokal and Rohlf 1969).


Morgan and Henrion (1990 p. 220) raised an important but often neglected issue when they stated that “… one of the most important challenges of policy analysis [decision analysis here] is to communicate the insights it provides to those who need them.” The “insights” that they refer to include the following.

  1. What is the overall degree of certainty about the conclusions? In other words, how robust is the choice of the recommended action? Is one action the best under 85% of the analyses performed or are any of, say, 7 different alternatives about equally likely to be the best?

  2. How is the optimal decision affected by different assumptions, parameter values, structure of the model, etc.?

  3. Which components of the analysis most affect the recommended optimal action? This will influence priorities for research.

The literature discusses several aspects of effective communication about uncertainties to resource managers, the public, and other scientists. We synthesize these into specific recommendations. First, take the time to provide good documentation. This is one of the least favourite activities of analysts but it is crucial to successful use of information on uncertainties. Good documentation requires the analyst to: (i) decide what the intended audience needs, (ii) decide which subset of information to display, (iii) decide which information to treat deterministically and which to treat in a probabilistic form, (iv) decide which sensitivity analyses are most important to show, and (v) document assumptions, data used, methods, caveats, uncertainties, sensitivity analyses, and their implications. This takes considerable effort but may make the difference between having the analyses used or ignored. Second, show decision makers the implications of uncertainty directly in terms of expected outcomes or optimal actions, rather than just some statement about how uncertain you are about some component parameter of the model. Third, establish a process for iterative interaction among resource managers, the public, and scientists, starting early in the analysis. This might involve a series of workshops with stakeholders and decision makers leading up to meetings where these users interact with the models used in the analysis. This interaction with other people might require considerable programming effort to set up an easy-to-use interface. Fourth, choose appropriate methods for presenting results, again depending on the needs of audience and what has been learned by cognitive psychologists about how people interpret information. For instance, in order to represent uncertainty in some estimated quantity like abundance, scientists are used to showing some normal or skewed probability distribution as a function of discrete intervals of abundance. 
However, some researchers (Ibrekk and Morgan 1987) found that the cumulative probability distribution is one of the best and most easily understood and interpreted modes of presentation. This is true even for people who have taken university statistics relatively recently and even after some background explanation is given to the test subjects. The cumulative probability distribution is better because users can read directly off the graph the probability that the X variable will be less than some amount.
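Converting a distribution over discrete abundance intervals into this cumulative form is straightforward; the intervals and probabilities below are illustrative:

```python
abundance_upper = [100, 200, 300, 400, 500]   # upper edges of intervals (t)
prob = [0.05, 0.20, 0.40, 0.25, 0.10]         # probability in each interval

# Running sum of interval probabilities gives the cumulative distribution.
cumulative = []
running = 0.0
for p in prob:
    running += p
    cumulative.append(running)

# A manager can read P(abundance <= x) straight off the cumulative curve:
p_at_most_300 = cumulative[abundance_upper.index(300)]
```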

To our knowledge, relatively little research of this type has been done on how accurately people interpret more sophisticated graphs such as isopleth diagrams or 3-D perspective plots. However, it appears that even some experienced and well-respected scientists cannot easily interpret isopleth diagrams but they fully understand when the same results are shown as families of curves in a single X-Y plot.

The last aspect of communication is that results from sensitivity analyses should focus on how the recommended management action changes as the assumptions or parameter values change. This relates directly to the decision maker's choices.


Fisheries managers and scientists often face situations in which the data are so incomplete that only the most rudimentary quantitative analysis is possible. In such cases, scientists should not present the results as the final answer but instead should emphasize that at best, the analysis might point in a general direction and that future research priorities are a key output at that stage. In these cases with weak data, scientists should be prepared to admit that a decision analysis is not possible or credible.

Instead of a thorough evaluation of management options when data are extremely weak, scientists should emphasize (i) what needs to be done to improve information for future analyses, and (ii) what management actions would be appropriate in the meantime. A three-part approach would lead to improved information. First, if any quantitative analysis has been done on similar fisheries systems, scientists should examine them to determine the most sensitive components. These would then be high priority for collection of new data. Second, scientists should recommend specific monitoring programs to ensure that appropriate data are gathered in a rigorous and usable form. Third, if managers wish to implement preliminary management actions until such time as better data become available, then those actions should be set up as part of an experiment with adequate monitoring (see earlier section on experimental management).

It is less clear what to recommend generally about which management actions to pursue while the above steps are being taken to improve data. One obvious recommendation would be to allow very limited harvesting so as to not overharvest the resource, but the tradeoff here is that less information will be gained early about the potential productivity of the stock. An important additional element of this would be to take steps to prevent fishing power from increasing too rapidly. There are few risks associated with slowly developing a fishery, but many risks with rapid development. Slow development might be achieved through reversing the usual burden of proof. Currently, the onus is on many management agencies to show that overharvesting is occurring before they take strong action to regulate the fishing industry. However, because of large uncertainties in the information, particularly in this situation with weak data, such agencies would probably not be able to show such an effect of harvesting until there was a drastic decline in abundance. If instead the burden of proof is placed on industry (Wright 1981; Sissenwine 1986) or a joint industry-government team (Peterman and Bradford 1987) to show that harvesting is not having a detrimental effect on fish populations before the fleet is allowed to increase its cumulative fishing power, this will tend to drastically slow down the rate of development of most fleets. This may enable the increase in knowledge to keep ahead of the increase in fishing power, which will help eliminate some of the problems seen in past fisheries. If the argument is made that such a reversal of the burden of proof would stifle the economic development of certain regions, then we simply ask managers to consider what long-term purpose was served by allowing the overcapitalization of fleets (and overharvesting of some fish populations) in the past.

Uncertainties in managing natural resources have led to new institutional arrangements in Europe and the United States that are more cautious than traditional approaches (e.g., the Marine Mammal Protection Act and the Michigan Environmental Protection Act). The Oslo Commission's (OSCOM) Prior Justification Procedure requires industry to go through stringent steps before it is allowed to use the historically common practice of dumping wastes in the ocean (OSCOM 1989). Some situations in fisheries, such as those involving long-lived, slow-growing species, are also candidates for this approach because the “permissive” approach (allowing human activities such as harvesting to proceed relatively unchecked until a problem appears) is often not viable given the lengthy period usually required to identify a problem. Thus,

“The permissive model is no longer viable because it cannot work well in the face of the large uncertainties presently found. Its failure demands a comprehensive rethinking… Instead, the inescapable presence of uncertainty should lead to a shift of the regulatory burden onto those seeking to utilize, and profit from, [natural resources].” (M'Gonigle et al. 1994).

Risk analysis
Risk analysis is a generic term for advice to management that considers uncertainty in states of nature; the methods and issues discussed previously in this paper can thus be considered part of risk analysis. Smith et al. (1993) present a collection of papers from a conference on risk analysis in fisheries management advice. While most of the approaches at that conference used either maximum likelihood or bootstrapping to evaluate consequences, Bayesian methods and decision analysis were poorly represented.
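The bootstrapping mentioned above can be sketched briefly. The yield figures and the statistic below are hypothetical, invented for illustration; a real assessment would bootstrap quantities from a fitted stock-assessment model rather than raw values:

```python
import random

random.seed(1)  # reproducible resampling for this illustration

# Hypothetical annual surplus-production estimates (tonnes); the values
# are invented for illustration, not taken from any real assessment.
yields = [62, 71, 55, 80, 68, 59, 74, 66, 70, 61]

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.10):
    """Percentile bootstrap interval for a statistic of the data."""
    replicates = sorted(
        stat([random.choice(data) for _ in data])  # resample with replacement
        for _ in range(n_boot)
    )
    lo = replicates[int(alpha / 2 * n_boot)]
    hi = replicates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean_yield = sum(yields) / len(yields)
lo, hi = bootstrap_ci(yields, lambda s: sum(s) / len(s))
print(f"point estimate: {mean_yield:.1f} t; 90% interval: {lo:.1f}-{hi:.1f} t")
```

The point is that the interval, not just the point estimate, is what reaches the decision maker.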


Some authors have suggested that one way to avoid many of the past problems with overharvesting is to create incentives for users to maintain the resource in the long term. This would involve having them essentially own the resource and be responsible for managing it. This approach may work best in situations such as sessile shellfish fisheries, where there may be a relatively stable local community of harvesters. However, aside from the issue of how to initially allocate partial ownership of the resource among interested people, this suggestion still does not get around the problem that Colin Clark (1973) identified: many long-lived resources have such slow rates of growth in abundance that, from a purely economic perspective, harvesters would be better off simply harvesting the resource to extinction and putting their short-term earnings into other investments that earn interest at a higher rate than the resource would have generated. They could then move on to some other economic activity. This is especially a problem where major multi-national companies move large operations to new countries when local conditions become unfavourable. Furthermore, an analogy with private agricultural farms illustrates the difficulty of maintaining long-term stewardship of a privatized resource. Farming practices in North America have created serious problems with soil erosion, as well as depletion of nutrients and of soil productivity. There is no guarantee that turning fishery resources over to private owners will avoid the same myopic view of the future.
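Clark's (1973) argument can be illustrated with a simple present-value comparison. The stock size, rates, and 50-year horizon below are hypothetical; this is a sketch of the arithmetic, not his full bioeconomic model:

```python
def npv(cashflows, discount_rate):
    """Net present value of a stream of annual cashflows (first at t=0)."""
    return sum(c / (1 + discount_rate) ** t for t, c in enumerate(cashflows))

biomass = 1000.0       # current stock biomass (tonnes), hypothetical
growth_rate = 0.03     # slow-growing stock: 3% increase per year
discount_rate = 0.10   # assumed return on alternative investments
price = 1.0            # value per tonne
horizon = 50           # years of sustainable harvesting considered

# Option A: liquidate the entire stock now and bank the proceeds.
liquidate = npv([biomass * price], discount_rate)

# Option B: harvest only the annual growth each year (sustainable yield).
sustainable = npv([biomass * growth_rate * price] * horizon, discount_rate)

print(f"liquidate now: {liquidate:.0f}; sustainable harvest: {sustainable:.0f}")
# Because the discount rate exceeds the stock's growth rate, liquidation
# has the larger present value, which is Clark's point.
```

Reversing the inequality (growth rate above the discount rate) reverses the conclusion, which is why the problem is most acute for long-lived, slow-growing species.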

Individual transferable quotas (ITQs) are now being used widely in New Zealand, Australia, and Canada as a means of internalizing some of the allocation problems. While ITQs are generally working well, they are not universally successful; high-grading and illegal fishing are common complaints.

Finally, regulations could be structured to give incentives to individuals who harvest in a responsible manner. For instance, there is a serious problem with by-catch of non-target species in the Alaskan groundfish fisheries. To reduce by-catch, an incentive for “clean fishing” is being considered: vessels that have a lower rate of by-catch than other vessels (verified by on-board observers) would be given more fishing time. Captains would thus have an incentive to choose geographical locations, depths, and ways of fishing that reduce the by-catch.


Scientific advice to managers will always need to be given with incomplete information. There are now many computational and statistical tools available to incorporate uncertainty into our assessments of expected consequences of alternative actions.

However, one could ask, “Will better management decisions be made and fewer losses occur if the scientific advice takes uncertainty into account?” The answer depends on how the uncertain information is used. There is a history of debate among fisheries scientists regarding the advisability of providing measures of uncertainty in stock assessments. When users and decision makers are simply provided with a range of potential long-term yields, harvesters commonly pressure managers to choose values towards the high end of the range, while conservation groups pressure them to choose from the low end. We therefore recommend that scientists avoid simply presenting a range of potential yields and instead show the consequences of alternative hypotheses in combination with alternative actions. Thus, instead of saying, “The sustainable yield may be between 50 and 100 tonnes, with our best guess being 75 tonnes”, the advice should take the form, “There is a 40% chance of being able to take 50 tonnes per year for the next 20 years, a 50% chance of being able to take 75 tonnes per year, and a 10% chance of being able to take 100 tonnes per year”. More fully informed decisions will likely result.
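Advice in that probabilistic form feeds directly into a simple decision table. The sketch below uses the probabilities and yield levels from the example above; the payoff rule (exceeding the true sustainable yield halves the realized long-term catch) is purely an assumption for illustration:

```python
# Hypothesized sustainable-yield levels (tonnes/yr over 20 years) and the
# probability assessed for each hypothesis, from the example in the text.
hypotheses = {50: 0.40, 75: 0.50, 100: 0.10}

# Candidate annual quotas a manager might set.
actions = [50, 75, 100]

def outcome(quota, true_yield):
    """Simplified payoff: full quota if sustainable, else a collapse penalty."""
    # Assumption for illustration only: harvesting above the true
    # sustainable yield halves the realized long-term annual catch.
    return quota if quota <= true_yield else true_yield * 0.5

for quota in actions:
    expected = sum(p * outcome(quota, y) for y, p in hypotheses.items())
    print(f"quota {quota:>3} t/yr -> expected annual yield {expected:.2f} t")
```

Under this assumed penalty, the 75-tonne quota maximizes expected annual yield; the table makes explicit both that conclusion and its sensitivity to the penalty, rather than hiding the tradeoff inside a single recommended number.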

We cannot say for certain whether the methods that we discuss would have prevented the fisheries failures described earlier. However, if a full accounting of uncertainty in the assessments had been made available to managers and users, it is likely that different decisions would have been made. For instance, scientists did not recommend that strong action be taken to control fishing on several North Atlantic and North Sea herring fisheries of the early 1970s until they were convinced that recruitment was decreasing. Because of the large interannual variability in recruitment, this convincing evidence did not appear until the spawning biomass was drastically reduced and severe depletion resulted for several stocks (Saetersdal 1980). It is possible that managers would have reduced fishing mortality earlier if they had been shown the consequences of various management actions in combination with several different biological hypotheses about the state of the stocks.

Acknowledgements
We thank Milo Adkison and two reviewers, Alec MacCall and J.J. Maquire, for useful comments on the manuscript.

References
Adkison, M.D. and R.M. Peterman. 1995. Results of Bayesian methods depend on details of implementation: an example of estimating salmon escapement goals. Fisheries Research, in press.

Berger, J.O. 1985. Statistical Decision Theory and Bayesian Analysis. Second edition, Springer-Verlag, New York. 617 p.

Beverton, R.J.H. and S.J. Holt. 1957. On the dynamics of exploited fish populations. U.K. Min. Agr., Fish. and Food, Fish. Invest. (Ser. 2), 19:1–533.

Butterworth, D.S. and A.E. Punt. 1995. On the Bayesian approach suggested for the assessment of the Bering-Chukchi-Beaufort Seas stock of bowhead whales. Rep. int. Whal. Comm. 45:303–311.

Clark, C.W. 1973. The economics of overexploitation. Science 181:630–634.

Clark, C.W. 1985. Bioeconomic Modelling and Fisheries Management. John Wiley, New York. 291 pp.

Clark, C.W. 1990. Mathematical Bioeconomics: The Optimal Management of Renewable Resources. 2nd ed., John Wiley, New York.

Dixon, W.J. and F.J. Massey, Jr. 1983. Introduction to Statistical Analysis. Fourth edition, McGraw Hill Book Co., New York. 638 p.

Finlayson, A.C. 1994. Fishing for truth: a sociological analysis of northern cod stock assessments from 1977–1990. Institute of Social and Economic Research, Memorial University of Newfoundland, St. John's, Newfoundland, Canada.

Frederick, S.W. and R.M. Peterman. 1995. Choosing fisheries harvest policies: when does uncertainty matter? Can. J. Fish. Aquat. Sci. 52:291–306.

Freudenberg, W.R. 1988. Perceived risk, real risk: social science and the art of probabilistic risk assessment. Science 242: 44–49.

Geiger, H.J. and J.P. Koenigs. 1991. Escapement goals for sockeye salmon with informative prior probabilities based on habitat considerations. Fish. Res. 11:239–256.

Gillis, D.M., R.M. Peterman and E.K. Pikitch. 1995. Implications of trip regulations for high-grading: a model of the behavior of fishermen. Can. J. Fish. Aquat. Sci. 52:402–415.

Henrion, M. and B. Fischhoff. 1986. Assessing uncertainty in physical constants. Amer. J. Physics 54:791–797.

Hilborn, R. 1985. Fleet dynamics and individual variation: why some people catch more fish than others. Can. J. Fish. Aquat. Sci. 42: 2–13.

Howard, R. A. 1988. Decision analysis: practice and promise. Manag. Sci. 34:679–695.

Holling, C.S. (ed.). 1978. Adaptive Environmental Assessment and Management. John Wiley & Sons, Chichester, England, 377 pp.

Hutchings, J.A. and R.A. Myers. 1994. What can be learned from the collapse of a renewable resource? Atlantic cod, Gadus morhua, of Newfoundland and Labrador. Can. J. Fish. Aquat. Sci. 51:2126–2146.

Ibrekk, H. and M.G. Morgan. 1987. Graphical communication of uncertain quantities to non-technical people. Risk Analysis 7(4):519–529.

International Pacific Halibut Commission. 1987. The Pacific halibut: Biology, fishery and management. International Pacific Halibut Commission Tech. Rep. No. 22.

Keeney, R.L. 1982. Decision analysis: An overview. Operations Research 30(5):803–838.

Keeney, R.L. and H. Raiffa. 1976. Decisions with Multiple Objectives: Preferences and Value Trade-offs. John Wiley & Sons Ltd., New York. 450 pp.

Lapointe, M.F., R.M. Peterman and A. MacCall. 1989. Trends in fishing mortality rate along with errors in natural mortality rate can cause spurious time trends in fish stock abundances estimated by Virtual Population Analysis (VPA). Can. J. Fish. Aquat. Sci. 46:2129–2139.

Leaman, B.M. 1991. Reproductive styles and life history variables relative to exploitation and management of Sebastes stocks. Environmental Biology of Fishes 30:253–271.

M'Gonigle, R.M., L. Jamieson, M.K. McAllister and R.M. Peterman. 1994. Taking uncertainty seriously: From permissive regulation to preventative design in environmental decision-making. Osgoode Hall Law Journal 32(1):99–169.

Mohn, R.K. 1993. Bootstrap estimates of ADAPT parameters, their projection in risk analysis and their retrospective pattern. p. 173–184. In S.J. Smith, J.J. Hunt and D. Rivard [ed.] Risk evaluation and biological reference points for fisheries management. Can. Spec. Publ. Fish. Aquat. Sci. 120.

Morgan, M.G. and M. Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, 332 pp.

Murphy, G.I. 1968. Patterns in life history and the environment. Am. Nat. 102:391–403.

OSCOM. 1989. Convention for the Prevention of Maritime Pollution by Dumping from Ships and Aircraft. 15th Meeting of the Oslo Commission, “On the Reduction and Cessation of Dumping Industrial Wastes at Sea,” Decision 89/1, Dublin, Ireland, 14 June 1989.

Overholtz, W.J., M.P. Sissenwine and S.H. Clark. 1986. Recruitment variability and its implications for managing and rebuilding the Georges Bank haddock stock. Can. J. Fish. Aquat. Sci. 43:748–753.

Parma, A.M. 1990. Optimal harvesting of fish populations with non-stationary stock-recruitment relationships. Natural Resource Modeling 4:39–76.

Parsons, L.S. 1993. Management of the marine fisheries of Canada. Can. Bull. Fish. Aquat. Sci. 225.

Peterman, R.M. 1990. Statistical power analysis can improve fisheries research and management. Can. J. Fish. Aquat. Sci. 47:2–15.

Peterman, R.M. and M.J. Bradford. 1987. Statistical power of trends in fish abundance. Can. J. Fish. Aquat. Sci. 44:1879–1889.

Peterman, R.M. and M.K. McAllister. 1993. A review of the experimental approach to reducing uncertainty in fisheries management: an extended abstract. Can. Spec. Publ. Fish. Aquat. Sci. 120:419–422.

Raiffa, H. 1968. Decision analysis. Addison-Wesley Publ. Co., Reading, MA. 309 p.

Rosenberg, A.A. and S. Brault. 1993. Choosing a management strategy for stock rebuilding when control is uncertain. p. 243–249. In S.J. Smith, J.J. Hunt and D. Rivard [ed.] Risk evaluation and biological reference points for fisheries management. Can. Spec. Publ. Fish. Aquat. Sci. 120.

Sainsbury, K.J. 1991. Application of an experimental approach to management of a tropical multispecies fishery with highly uncertain dynamics. ICES Mar. Sci. Symp. 193:301–320.

Saetersdal, G. 1980. A review of past management of some pelagic fish stocks and its effectiveness. Rapp. P.-v. Reun. Cons. int. Explor. Mer 177:505–512.

Schnute, J.T. and R. Hilborn. 1993. Analysis of contradictory data sources in fish stock assessment. Can. J. Fish. Aquat. Sci. 50:1916–1923.

Sims, S.E. 1984. An analysis of the effect of errors in the natural mortality rate on stock-size estimates using virtual population analysis (cohort analysis). J. Cons. int. Explor. Mer 41:149–153.

Sissenwine, M. P. 1986. Councils, NMFS, and the law. pp. 203–211 In: R.H. Stroud (ed.), Multi-jurisdictional Management of Marine Fisheries. Marine Recreational Fisheries, vol. 11. Publ. by National Coalition for Marine Conservation.

Slovic, P. 1987. Perception of risk. Science 236:280–285.

Smith, S.J., J.J. Hunt and D. Rivard [ed.]. 1993. Risk evaluation and biological reference points for fisheries management. Can. Spec. Publ. Fish. Aquat. Sci. 120.

Sokal, R.R. and F.J. Rohlf. 1969. Biometry: The Principles and Practice of Statistics in Biological Research. W.H. Freeman, San Francisco, 776 pp.

Taylor, E.B. 1991. A review of local adaptations in Salmonidae, with particular reference to Pacific and Atlantic salmon. Aquaculture 98:185–207.

Vaughan, D.S. and W. Van Winkle. 1982. Corrected analysis of the ability to detect reductions in year-class strength of the Hudson River white perch (Morone americana) population. Can. J. Fish. Aquat. Sci. 39:782–785.

Walters, C.J. 1986. Adaptive Management of Renewable Resources. MacMillan Publ. Co., New York, 374 pp.

Walters, C.J. 1987. Nonstationarity of production relationships in exploited populations. Can. J. Fish. Aquat. Sci. 44(Suppl.2):156–165.

Walters, C.J. and R. Hilborn. 1976. Adaptive control of fishing systems. J. Fish. Res. Board Can. 33:145–159.

Wright, S.M. 1981. Contemporary Pacific salmon fisheries management. N. Amer. J. Fish. Mgmt. 1:29–40.
