Uncertainty, as defined by the Technical Consultation on the Precautionary Approach to Capture Fisheries (TCPA), is “The incompleteness of knowledge about the state or process of nature” (FAO/Govt. of Sweden 1995). Statistical uncertainty is “stochasticity or error from various sources as described using statistical methodology.” The TCPA defines risk as “the probability of something bad happening.” Note that in decision theoretic terms, risk is defined as the average loss or forecasted loss when something bad happens.
Clearly, when management decisions are to be based on quantitative estimates from fishery assessment models, it is desirable that the uncertainty be quantified, and used to calculate the probability of achieving the desired target and/or risk of incurring undesirable events. The process of communicating this risk to decision-makers is in its early developmental stages and presents substantial challenges to both fishery technicians and managers. In turn, fishery managers and participants must develop means of objectively evaluating the potential costs of undesirable events and define acceptable levels of risk and of short-term yield which can be foregone to reduce these risks.
Evaluation of the probability and expected cost of undesirable events which may result from a particular course of action is desirable when proposing management measures (e.g. Beddington 1978; Francis 1991). While this practice has been rare in the past, some failures in the management of well-studied stocks in recent years have brought this issue to the scientific forefront. There have been several recent workshops which have focused on the evaluation of risk in fisheries management (SEFSC 1991, NAFO 1991, Anon. 1992, Smith et al. 1993, Kruse et al. 1993, FAO/Govt. of Sweden 1995).
There are several sources of uncertainty in the calculation of reference points, and in the evaluation of stock status relative to these reference points. Five types of uncertainty arising from an imprecise knowledge of the state of nature are (Rosenberg and Restrepo 1992):
- Measurement uncertainty is the error in the observed quantities such as the catch or biological parameters;
- Process uncertainty is the underlying stochasticity in the population dynamics such as the variability in recruitment;
- Model uncertainty is the misspecification of model structure;
- Estimation uncertainty can result from any, or a combination, of the above uncertainties and is the inaccuracy and imprecision in estimates of abundance or fishing mortality rate;
- Implementation uncertainty results from variability in the resulting implementation of a management policy, i.e. inability to exactly achieve a target harvest strategy.
Note that sources of uncertainty include not only statistical error in detecting stock status and environmental trends or errors in population analysis, but also wrong decisions and an inefficient management framework; issues dealt with later in this paper.
The potential sources of variability in commonly available data from commercial fisheries -- catch, fishing effort, and biological samples of the catch for length, age and maturity determination -- have been a focus of fishery statisticians and assessment scientists for several decades (e.g. Doubleday and Rivard 1983, ICES Assessment Working Group Reports). These sources are now relatively well known and quantified. Where sample surveys are used, there are standard statistical problems of sample size and representativeness. However, difficulty in accounting for discarding continues to bias landing statistics in many fisheries (Alverson et al. 1994). In logbook and reporting systems there is often misreporting, and in quantifying effort, there are often hidden increases in the fishing power of boats due to fishermen learning and technological change.
While fishing surveys with a standard boat and randomized design can provide objective estimates or indices of stock size which are less liable to biases than catch data, the resulting survey estimates also have a significant variance (e.g. Doubleday and Rivard 1981). The variance associated with acoustic surveys is usually also considerable (Simmonds et al. 1992). Table 4 includes some estimates of the likely range of errors in various population variables for well-studied offshore fish stocks.
The natural variability associated with fish production systems can be enormous. Environmental variability, the largest source of process errors, usually manifests itself as recruitment variability (e.g. Hennemuth et al. 1980). In short-lived populations this can result in dramatic fluctuations in adult biomass (Lluch-Belda et al. 1989).
Although there may be relationships and patterns which can be statistically modelled to provide some explanation of past events, there has been little success in predicting environmental conditions or the responses of fish populations sufficiently far in the future to be useful to management (Walters and Collie 1988). Thus environmental variability is often treated as entirely stochastic, and the most appropriate approach is usually to measure its effects on the fish population directly with surveys of pre-recruits (Bradford 1992).
In addition to variability on an inter-annual time scale there are also environmental changes on decadal and longer time-scales, which affect population abundance, distribution and location in ways which are difficult to measure, much less predict (e.g. Murawski 1993).
It is generally accepted that fish stocks become more susceptible to environmental variability as exploitation increases. Thus there is a direct effect of management on uncertainty, and the reduction of uncertainty may itself be a management objective.
Model errors are seldom evaluated, because the data required to distinguish among different models are not available. Studies on the relative performance of various model formulations, e.g. Schaefer and Fox production models, suggest that they may provide substantially different answers using the same data. In fact, the same model may give quite different results depending on the error structure assumed (Polacheck et al. 1993). Other examples of potential model errors in stock assessment include the models used to calibrate indices of abundance (e.g. VPA tuning), the use of conventional constant values for natural mortality, and the setting of F values for the young fish as a function of that for older individuals.
Model error can be examined to some extent by using several models to evaluate the same resource although, in practice, the data and expertise required for a multi-model approach are seldom available. However, in those instances where it has been attempted, substantial differences are often found; for example, in the case of bluefin tuna in the west Atlantic, three methods gave MSY estimates of 3,942, 5,530 and 6,755 mt/yr (ICCAT 1994).
Owing to the sequential nature of assessment, estimation errors occur at several stages, and are propagated through the process. They can be seen as the combined result of the three types of error outlined above. Although in the past, explicit estimates of accuracy or precision were rare, they are now becoming more common in the literature. Some estimates of the likely order of the error for quantities commonly used in assessing stock status are presented in Table 4 for shelf fisheries. The generality of these examples is open to question, but they are likely to be underestimates for pelagic and large pelagic fish stocks.
Attempts to quantify estimation error use the estimated variability in measured parameters. However, it is important to note that several procedures use assumed or unmeasured inputs, for which there is no information on variability. The most significant among these assumed input parameters is natural mortality, which is seldom measured or readily measurable. A recent analysis of haddock in three subareas of the North Sea provided estimates of M ranging from 0.37 to 0.53, considerably higher than the conventional value of 0.2 which is used for North Atlantic groundfish stocks (Jones and Shanks 1990).
Most stock assessments and calculations of target reference points involve a sequence of complex analyses. Inevitably, at each stage there are decisions to be made which may significantly affect the outcome of the analysis. In the absence of a methodology which can provide a unique solution, many management systems adopt a committee approach, in which the result is often arrived at by consensus. For some types of assessment, notably VPA-based assessments, this problem has been dealt with by automating the procedure (Gavaris 1988, Conser and Powers 1990).
Estimation errors which result from biases or trends in input variables may be very difficult to detect or describe. One dramatic example is the systematic error in estimation of stock abundance using sequential population analysis methods (virtual population analysis and cohort analysis). These errors were only detected when scientists undertook retrospective analyses several years after the population estimates had been used to provide management advice. This could only be done by comparing population estimates for cohorts which had passed almost entirely through the fishery (cohorts for which the SPA has converged and for which estimates are little affected by the input values of F for the most recent year) with the estimates of those cohorts at the time when they had recently entered the fishery. Substantial differences between the two estimates, sometimes an order of magnitude or more, were detected.
Owing to the complexity of the assessment process, the causes of the differences found by the retrospective analyses were exceedingly difficult to evaluate, and are still not well understood. They have been variously attributed to misreporting of catch, trends in catchability, the assumption of constant natural mortality across all age groups, and assumptions regarding partial recruitment (the susceptibility to exploitation) at various ages (e.g. Sinclair et al. 1990, Parma 1993). Sinclair et al. (1990) concluded that “estimates of population size from the converged part of the SPA do not necessarily represent the true population size for those years”, thus leaving considerable doubt as to the validity of a methodology which has been the cornerstone of stock assessment in many developed parts of the world.
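The convergence property that retrospective analyses exploit can be illustrated with Pope's cohort analysis approximation, N_a = N_{a+1}·e^M + C_a·e^{M/2}. The sketch below uses entirely hypothetical catches-at-age and the conventional M = 0.2; it shows that two very different guesses for terminal survivors converge towards similar abundances at the youngest ages, which is why estimates for "converged" cohorts are insensitive to the terminal F assumption.

```python
import math

M = 0.2  # conventional natural mortality assumption for groundfish
# Hypothetical catches (numbers) from one cohort, ages 1-6:
catch_at_age = [800.0, 600.0, 400.0, 250.0, 120.0, 50.0]

def cohort_analysis(terminal_n):
    """Back-calculate numbers-at-age with Pope's approximation:
    N_a = N_{a+1} * exp(M) + C_a * exp(M / 2)."""
    n = [0.0] * (len(catch_at_age) + 1)
    n[-1] = terminal_n  # assumed survivors after the oldest age
    for a in range(len(catch_at_age) - 1, -1, -1):
        n[a] = n[a + 1] * math.exp(M) + catch_at_age[a] * math.exp(M / 2)
    return n

# Two terminal guesses differing by a factor of ten...
low = cohort_analysis(10.0)
high = cohort_analysis(100.0)

# ...converge going backwards, because accumulated catches dominate.
for age in range(len(catch_at_age)):
    print(f"age {age + 1}: N = {low[age]:7.0f} vs {high[age]:7.0f} "
          f"(ratio {high[age] / low[age]:.2f})")
```

The ratio between the two reconstructions shrinks from 10 at the terminal age to roughly 1.1 at age 1, mirroring why retrospective comparisons must wait until a cohort has largely passed through the fishery.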
The values in Table 4 make it clear that the current stock size and fishing mortality are known with relatively low accuracy for most fisheries, although with retrospective analysis, particularly VPA, the above estimates may improve somewhat. Total yield may appear to be known with higher precision than other variables, but often suffers from high or unknown biases due to discarding and misreporting, particularly if management is by quotas. Survey estimates of biomass typically have a higher variance, but may be less biased, and improve with research investment. In all cases, the relative magnitude of change from year to year will be known with more precision than the absolute value.
|“IN THE ABSENCE OF PRECISE INFORMATION, JUDGMENTALLY PRUDENT TARGETS MAY NEED TO BE ESTABLISHED”|
To date, the focus has been on quantifying the uncertainty associated with estimates of stock status, while reference points are frequently treated as known point values, even though they too are estimated with error. The problem is likely to be most severe when the status and target values are estimated using different models and/or data. Ultimately, fishery scientists will need to expand their approaches to take into account the conjoint uncertainty in estimates of stock status and of reference points and, if possible, interactions in the estimation process.
Implementation error is usually regarded as falling outside the scientific component of fisheries management and although very much in evidence, has been little studied (O'Boyle 1993). It is largely the failure to control exploitation by whatever MCS (monitoring, control and surveillance) measures have been adopted. The reasons are many and interrelated, for example, poor surveillance and enforcement, lack of concern by the judiciary when cases are heard, failure of participants to support measures due to lack of opportunity for input during their development or simply disagreement with the measures enforced.
In management systems which are based primarily on advice from biological assessments, failure to incorporate, or incorrect incorporation of, non-biological information also contributes to implementation error. These problems may frequently be known to the managers and their technical advisers, but it may be impossible to quantify the uncertainty, except in retrospect.
A workshop to review management of groundfish stocks on the Scotian Shelf off eastern Canada from 1977 to 1993 concluded that implementation error was the primary cause of the failure to conserve stocks (Angel et al. 1994). The workshop noted that “In sum, the tactical approach chosen to control fishing mortality generated illegal behaviour which was not curbed by the available enforcement regime.”
The institutional aspects of implementation error will be considered in greater detail in Section 4.
The most common approach to estimating the effect of variability on the outputs of models which use a variety of input parameters is a ‘propagation of errors’ simulation in which the input parameters are allowed to vary and the variability of the model output is described in probabilistic terms. The two most commonly used methodological approaches to the propagation of errors are Monte Carlo simulations, and resampling techniques such as ‘bootstrapping’. Monte Carlo simulation is the replication of a procedure with input data or parameters drawn randomly from a parametric distribution (sometimes referred to as parametric bootstrapping). Resampling techniques involve the replication of a procedure using input data obtained by sampling from empirical observations (Manly 1991). Smith et al. (1993) provide an overview of ‘bootstrapping’ approaches to identifying and quantifying uncertainties associated with reference points. These procedures provide estimates of the probability density function (PDF) for the outputs, which may be displayed in several ways (Fig. 12).
Monte Carlo simulation is statistically more demanding than ‘bootstrapping’ in that it requires that the error distribution of the input parameters be known. Rice (1993) suggests that nonparametric density estimation methods (Silverman 1986) may also be applicable, particularly in instances where there is doubt as to the extent to which the models misspecify the functional relationships between pairs of historical variables, e.g. spawning stock biomass and recruitment.
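The two propagation-of-errors approaches can be sketched on a deliberately simple example: the Schaefer result MSY = rK/4. All parameter values, CVs, and the small set of paired (r, K) "observations" below are hypothetical; a real bootstrap would resample the underlying data and refit the model rather than resample parameter pairs.

```python
import random
random.seed(1)

# Hypothetical Schaefer parameters: MSY = r * K / 4.
r_hat, r_cv = 0.40, 0.15       # intrinsic growth rate and its CV (assumed)
k_hat, k_cv = 100_000.0, 0.20  # carrying capacity (t) and its CV (assumed)

def msy(r, k):
    return r * k / 4.0

# Monte Carlo: draw inputs from an assumed parametric (here normal) distribution.
mc = [msy(random.gauss(r_hat, r_cv * r_hat),
          random.gauss(k_hat, k_cv * k_hat)) for _ in range(5000)]

# Resampling ('bootstrap'): redraw, with replacement, from a set of
# hypothetical paired empirical estimates of (r, K).
obs = [(0.35, 95_000), (0.42, 110_000), (0.38, 90_000),
       (0.45, 105_000), (0.40, 98_000)]
boot = [msy(*random.choice(obs)) for _ in range(5000)]

def quantile(xs, q):
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]

for name, sample in (("Monte Carlo", mc), ("bootstrap", boot)):
    print(f"{name}: median {quantile(sample, 0.5):8.0f} t, "
          f"80% interval {quantile(sample, 0.1):8.0f}-{quantile(sample, 0.9):8.0f} t")
```

The resulting samples approximate the PDF of MSY, from which probability, survivor, or cumulative distributions (Fig. 12) can all be read off directly.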
Information on variability in model input has been used to assess risk in two ways. First, it has been used to simulate the response of the population to various harvesting strategies with the aim of comparing long-term management approaches (e.g. Ruppert et al. 1985). Second, it has been used to estimate the probability of being at or close to a target point in a given year, and in that light, the probability of exceeding some undesirable limit or threshold (e.g. Mohn 1993). The latter use has been the main recent focus of attention in fisheries management.
Figure 12: Three ways of displaying the information on uncertainty using a probability distribution: (a) the Probability Density Function is most appropriate when the probability of hitting a target is the primary concern; (b, c) the Survivor Probability and Cumulative Probability Distributions are most appropriate when avoidance of a ceiling and floor LRP respectively are the primary concern.
To advise on risk, it is necessary to go beyond the probability of occurrence of particular events, and to quantify the degree to which the events are undesirable; that is, the cost or impact of the event. This requires that the relationship between specific outcomes and the consequent loss of benefits be specified (Anon. 1992). In fisheries, increasing yield is accompanied by reduced biomass, with associated risks of variability and stock collapse. The simplest risk question is usually: “How much catch can be taken without reducing the stock to the point where it may fluctuate unacceptably, and/or be unable to replenish itself?” There are as many other questions regarding risk as there are management objectives, or combinations of management objectives. As the quantification of risk and the application of decision theory based on risks become more formally incorporated into fisheries management, the concept will inevitably be applied to more complex social and economic issues.
At present, one of the main impediments to the evaluation and use of risk in the provision of management advice has been the formal definition of ‘safe’ (acceptable risk). Clearly this should be fishery specific. However, some precautionary generalisations are desirable. For example, although risk is included in the conceptual basis of management of New Zealand fisheries, it has not been formally defined. Francis (1993) proposes a definition in which the level of harvesting should be considered safe if it maintains the spawning stock biomass above 20% of the virgin stock level at least 90% of the time. Definitions of acceptable risk will generally be stated in similar terms to those discussed so far, although whether levels of risk of 10% (or even higher) are acceptable will depend on what can be justified by the available data.
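A safety criterion of the kind Francis (1993) proposes can be checked by simulation. The sketch below is illustrative only: a logistic biomass model with lognormal process error stands in for a full age-structured model with recruitment variability, and all parameter values are assumed.

```python
import random
random.seed(7)

def fraction_above(h, years=2000, r=0.5, k=1.0, sigma=0.4):
    """Fraction of years biomass stays above 0.2 * K under harvest
    rate h, in a logistic model with lognormal process error (an
    illustrative stand-in for recruitment variability)."""
    b, above = 0.5 * k, 0
    for _ in range(years):
        surplus = r * b * (1 - b / k) * random.lognormvariate(0, sigma)
        b = max(1e-6, b + surplus - h * b)  # harvest, floor at near-zero
        if b > 0.2 * k:
            above += 1
    return above / years

# A Francis-style rule: 'safe' if above 0.2 * B0 at least 90% of the time.
for h in (0.05, 0.25, 0.45):
    p = fraction_above(h)
    verdict = "safe" if p >= 0.90 else "not safe"
    print(f"h = {h:.2f}: above 0.2*B0 in {100 * p:5.1f}% of years -> {verdict}")
```

Low harvest rates pass the 90% criterion easily, while rates near the deterministic limit (h approaching r) fail it, which is the intended behaviour of such a screening rule.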
Two categories of risk can be identified: the risk of not achieving a TRP, and the risk of exceeding an LRP (Mace 1994). The costs of not achieving a TRP are usually defined in terms of the short-term reduction or interruption of the flow of benefits to participants in the fishery and consumers, even though this may result in a net gain in the long term. This may also be partly offset by a rise in market prices resulting from a reduction in supply. For species with low natural mortality, most of the yield foregone in one year should be available the following year. For species with high natural mortality, the unharvested biomass will make a contribution through predation to other, possibly commercially valuable, components of the food web.
The costs of exceeding an LRP are much more serious and, as discussed earlier, range from stock decline or collapse, impacts on associated species and ecosystem destabilization, to long-term loss of earnings, including intergenerational impacts. If the conditions for safe harvesting can only be met by expenditures on research, management and enforcement that exceed the net rent likely to be provided by the resource, other less costly approaches to management, such as intermittent harvesting (pulse fishing or culling) under close supervision, or rotation of fishing among fishing areas, should be considered.
Consistent with the two categories of risk described above are two types of management error which may arise due to uncertainty about the current status of the stock (Rosenberg and Restrepo 1995). The terms Type I and Type II error are adopted from standard statistical usage. Type I error occurs when the scientist erroneously advises the manager that overfishing is taking place. Type II error occurs when the scientist erroneously concludes that the stock is underfished. As indicated above, the consequences of a bias towards Type II errors are more serious than those of Type I error.
A management framework that invokes preset actions when one or more (whether qualitative or quantitative) LRPs have been exceeded is in effect a precautionary approach.
One context for this approach is analogous to a thermostat (Die and Caddy in press): the fishery operating under strict access control, is not subject to a catch target or limitation, but once one, or a series of LRPs, show evidence of overexploitation, a pre-established management action is triggered which reduces the fleet effort. This is maintained or reinforced until the resource recovers to a pre-agreed level, when exploitation rate may be increased slightly.
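The thermostat analogy can be expressed as a simple feedback rule. The trigger level, rebuild level, effort cut, and restoration step below are all hypothetical; the point is only the structure: a pre-established cut fires when the LRP is breached, effort is held until a pre-agreed recovery level is reached, and exploitation is then relaxed only slightly.

```python
# Hypothetical 'thermostat' control rule parameters:
LRP, REBUILD = 0.3, 0.5   # trigger and recovery levels (fraction of B0)
CUT, STEP = 0.5, 0.05     # preset effort cut and cautious restoration step

def adjust_effort(effort, index, triggered):
    """Return (new_effort, triggered) after one management cycle,
    given the current biomass index (fraction of B0)."""
    if index < LRP:
        return effort * CUT, True      # pre-established action fires
    if triggered and index < REBUILD:
        return effort, True            # hold until the recovery level
    if triggered:
        return effort * (1 + STEP), False  # recovered: relax slightly
    return effort, False               # no action required

effort, triggered = 1.0, False
for index in (0.6, 0.45, 0.25, 0.2, 0.35, 0.55, 0.6):
    effort, triggered = adjust_effort(effort, index, triggered)
    print(f"index {index:.2f} -> effort {effort:.3f} (triggered={triggered})")
```

Note the asymmetry built into the rule: effort is halved on each breach but restored in 5% steps, so recovery of fishing pressure is deliberately slower than its reduction.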
There can also be risks due to unforeseen biological interactions which are beyond the scope of management control, but nonetheless affect the fishery. For example, the invasion of Norwegian waters by harp seals which in 1987 and 1988 were estimated to have consumed 325,000 ± 75,000 t of cod and saithe, produced a sudden decline in three year classes (Ugland et al. 1993). Such environmental and biological events, even though not necessarily caused by fishing, will need to be predicted where possible, monitored, and taken into account in relation to any biomass-based LRP.
There are no standard methods for communicating uncertainty and risk to fishery decision-makers (Rosenberg and Restrepo in press). Basic statistics provide a variety of means of communicating variability which can be used to indicate the uncertainty associated with a particular estimate, or the probability of occurrence of an undesirable event. Probability density functions are most frequently used to communicate the variability associated with a mean when the observations are normally distributed. For fishery management purposes, cumulative probability or cumulative survivor distributions may be more useful when the aim is to estimate the probability of avoiding an upper or lower limit (Fig. 12). Nonparametric methods based on percentiles or quartiles, and the use of box plots, may be more appropriate when distributions are skewed.
The method for communicating uncertainty and risk to managers will depend on their level of technical sophistication. In most developing country situations, it will be important to relate the uncertainty to characteristics of the fishery which are well known, e.g. an amount of catch, rather than an F level. A simple graphical presentation was used for flyingfish in the eastern Caribbean as a means of communicating trends in yield, catch rates, their variability and the increased probability of undesirable events with increasing F (Fig. 13).
For many valuable stocks, attempts to quantify uncertainty and risk will be justified. For the northern cod fishery of Newfoundland and Labrador, Restrepo et al. (1992) used the Monte Carlo approach to investigate risks associated with quota management. Using 1000 simulation runs they found that, if the aim at the end of 1990 was to achieve F1991 = F1990, quota estimates for 1991 could be in the range 170,000–260,000 t when all known elements of uncertainty were introduced into the model. Such a simulation provides the basis for evaluating the risk of any quota. Thus they noted that increasing the quota from 210,000 t by 5,000 t doubles the risk of exceeding F1990. (Incidentally, there is considerable interest in revisiting this particular risk evaluation following the collapse and closure of this cod fishery over the last few years.)
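The structure of such a quota-risk simulation can be sketched in a few lines. The numbers below are not those of Restrepo et al. (1992); the biomass estimate, its CV, and the target exploitation rate are all invented for illustration, and a single lognormal error on biomass stands in for the many sources of uncertainty they modelled.

```python
import random
random.seed(3)

# Hypothetical inputs (not the northern cod values):
b_hat, b_cv = 1_200_000.0, 0.25  # biomass estimate (t) and its CV (assumed)
u_target = 0.18                  # target exploitation rate (assumed)

def risk(quota, n=20_000):
    """Monte Carlo estimate of the probability that a fixed quota
    pushes the realized exploitation rate above the target, given
    lognormal uncertainty in current biomass."""
    over = 0
    for _ in range(n):
        b = b_hat * random.lognormvariate(-0.5 * b_cv**2, b_cv)
        if quota / b > u_target:
            over += 1
    return over / n

for quota in (180_000, 210_000, 215_000, 240_000):
    print(f"quota {quota:,} t: risk of exceeding target rate = {risk(quota):.2f}")
```

Comparing the risk across candidate quotas is precisely the kind of output that lets managers see how quickly risk climbs with each increment of catch.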
Hilborn and Peterman (1995) recommended that scientists avoid simply presenting a range of values. They state that advice should deal with alternative consequences of alternative hypotheses as well as alternative actions. They suggest that instead of saying that “the sustainable yield may be between 5 and 100 mt, with our best guess being 75 mt”, more fully informed discussions would result if the advice were presented in the form that “there is a 40% chance of being able to take 50 mt/yr for the next 20 years, a 50% chance of being able to take 75 mt/yr and a 10% chance of being able to take 100 mt/yr.”
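Statements of this form fall out naturally from any simulation that yields a sample of plausible sustainable yields: the chance of being able to take y mt/yr is simply the fraction of the sample at or above y. In the sketch below the sample is drawn from an assumed triangular distribution between 5 and 100 mt with mode 75 mt, purely to mimic the range quoted above.

```python
import random
random.seed(5)

# A sample of equally plausible sustainable-yield values (mt), here
# drawn from an assumed triangular distribution for illustration.
draws = [random.triangular(5, 100, 75) for _ in range(10_000)]

def chance_of_taking(y):
    """Probability that a harvest of y mt/yr is sustainable: the
    fraction of plausible sustainable yields at or above y."""
    return sum(d >= y for d in draws) / len(draws)

for y in (50, 75, 100):
    print(f"chance of being able to take {y} mt/yr: "
          f"{100 * chance_of_taking(y):.0f}%")
```

This is the empirical survivor distribution of Fig. 12(b) read at a few candidate harvest levels, repackaged as plain-language advice.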
The current focus on the quantification of uncertainty and risk requires a considerable amount of information and expertise. It is important for fishery advisers and managers to note that subjective views of risk based on the experience of participants, can also be applied in management. Most informed fishers and managers would agree that there is an unacceptably high risk that uncontrolled fishing on grouper spawning aggregations will lead to extinction of the aggregation, and that access to aggregations should be controlled. No assessment of the particular stock is required for management action. The data to estimate an optimal escapement may not be available, nor, due to discounting, may it be perceived as economically feasible to acquire and analyse the data, but sustainability can be achieved by limiting access.
Figure 13: A summary of fishery characteristics based on a stock-recruitment simulation for eastern Caribbean flyingfish, indicating the risk of undesirable occurrences associated with increasing levels of exploitation. These include variability in catch, and by inference, catch rate, and probability of years with ‘critically low catch’ defined as annual catch < 30% of current average catch, and ‘collapse’, defined as critically low catch for four or more consecutive years (Mahon 1989).