

APPENDIX C: STOCHASTIC PRODUCTION FRONTIERS


Stochastic production frontiers were initially developed for estimating technical efficiency rather than capacity and capacity utilization. However, the technique also can be applied to capacity estimation through modification of the inputs incorporated in the production (or distance) function. A potential advantage of the stochastic production frontier approach over DEA is that random variations in catch can be accommodated, so that the measure is more consistent with the potential harvest under “normal” working conditions. A disadvantage of the technique is that, although it can model multiple output technologies, doing so is somewhat more complicated, requires stochastic multiple output distance functions, and raises problems for outputs that take zero values (Paul, Johnson and Frengley, 2000).

The underlying theory

A production function defines the technological relationship between the level of inputs and the resulting level of outputs. If estimated econometrically from data on observed outputs and input usage, it indicates the average level of outputs that can be produced from a given level of inputs (Schmidt, 1986). A number of studies have estimated the relative contributions of the factors of production through estimating production functions at either the individual boat level or total fishery level. These include Cobb-Douglas production functions (Hannesson, 1983), CES production functions (Campbell and Lindner, 1990) and translog production functions (Squires, 1987; Pascoe and Robinson, 1998).

An implicit assumption of production functions is that all firms are producing in a technically efficient manner, and the representative (average) firm therefore defines the frontier. Variations from the frontier are thus assumed to be random, and are likely to be associated with mis- or un-measured production factors. In contrast, estimation of the production frontier assumes that the boundary of the production function is defined by “best practice” firms. It therefore indicates the maximum potential output for a given set of inputs along a ray from the origin. Some white noise is accommodated, since the estimation procedures are stochastic, but an additional one-sided error represents any other reason firms would be away from (within) the boundary. Observations within the frontier are deemed “inefficient”, so from an estimated production frontier it is possible to measure the relative efficiency of certain groups or a set of practices from the relationship between observed production and some ideal or potential production (Greene, 1993).

A general stochastic production frontier model can be given by:

$q_j = f(x_j;\, b)\, e^{v_j - u_j}$ (1)

where qj is the output produced by firm j, x is a vector of factor inputs, vj is the stochastic (white noise) error term and uj is a one-sided error representing the technical inefficiency of firm j. Both vj and uj are assumed to be independently and identically distributed (iid), with variances σv² and σu² respectively.

Given that the production of each firm j can be estimated as:

$q_j = f(x_j;\, b)\, e^{v_j}\, e^{-u_j}$ (2)

while the efficient level of production (i.e. no inefficiency) is defined as:

$q_j^{*} = f(x_j;\, b)\, e^{v_j}$ (3)

then technical efficiency (TE) can be given by:

$TE_j = \frac{q_j}{q_j^{*}} = e^{-u_j}$ (4)

Hence, TEj = e^(−uj), which is constrained to be between zero and one in value. If uj equals zero, then TE equals one, and production is said to be technically efficient. Technical efficiency of the jth firm is therefore a relative measure of its output as a proportion of the corresponding frontier output. A firm is technically efficient if its output level is on the frontier, which implies that qj/qj* equals one in value.

While the techniques have been developed primarily to estimate efficiency, they can be readily modified to represent capacity utilization. In estimating the full utilization production frontier, a distinction must be made between inputs comprising the capacity base (usually capital inputs), and variable inputs (usually days, or variable “effort”). If capacity is defined only in terms of capital inputs, the implied variation in output, and thus variable effort, from its full utilization level is sometimes termed an indicator of capital utilization.

If variable inputs are assumed to be approximated by the number of hours or days fished (i.e. nominal units of effort), estimating the potential output producible from the capacity base with variable inputs “unconstrained” implies removing this variable from the estimation of the frontier. The resulting production frontier is thus defined only in terms of the fixed factors of production, or K. In particular, it will be supported by observations for the boats that have the greatest catch per unit of fixed input (which generally corresponds to the boats that employ the greatest level of nominal effort for a particular level of K). The resulting measure of technical efficiency is equivalent to the technically efficient capacity utilization (TECU), accommodating both the impacts of technical inefficiency and deviations from full utilization of the capacity base. That is, it represents the ratio of observed output to the potential capacity output that could be achieved if all fixed inputs were utilized efficiently and fully.

Only limited attempts to estimate stochastic production frontiers for fisheries have been undertaken (Kirkley, Squires and Strand, 1995, 1998; Coglan, Pascoe and Harris, 1999; Sharma and Leung, 1999; Squires and Kirkley, 1999; Pascoe, Andersen and de Wilde, 2001; Pascoe and Coglan, 2002). These have focused upon the estimation of efficiency rather than capacity, although the capacity problem has recently been addressed by Kirkley, Morrison and Squires (2001) and Tingley and Pascoe (2003) using SPF procedures.[48] The techniques used and problems encountered are similar, and distinguishing between the utilization and efficiency components - thus providing an unbiased estimate of capacity utilization - requires first computing the more standard inefficiency measure.

Functional forms for the production function

Estimation of the SPF requires a particular functional form of the production function to be imposed. A range of functional forms for the production function frontier are available, with the most frequently used being a translog function, which is a second order (all cross-terms included) log-linear form. This is a relatively flexible functional form, as it does not impose assumptions about constant elasticities of production[49] nor elasticities of substitution[50] between inputs. It thus allows the data to indicate the actual curvature of the function, rather than imposing a priori assumptions. In general terms, this can be expressed as:

$\ln Q_{j,t} = b_0 + \sum_i b_i \ln X_{j,i,t} + \sum_i \sum_{k \geq i} b_{i,k} \ln X_{j,i,t} \ln X_{j,k,t} + v_{j,t} - u_{j,t}$ (5)

where Qj,t is the output of vessel j in period t and Xj,i,t and Xj,k,t are the variable and fixed vessel inputs (i,k) to the production process. As noted above, the error term is separated into two components, where vj,t is the stochastic error term and uj,t is an estimate of technical inefficiency.
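To make the construction of (5) concrete, the following minimal Python sketch builds the translog regressor set (logs, squares and each distinct cross-product) from raw input data. The DataFrame, column names and values are hypothetical, and a frontier package would still be required to estimate the model itself.

```python
import numpy as np
import pandas as pd

def translog_terms(df, inputs):
    """Build the first- and second-order regressors of a translog
    production function (equation 5) from raw input columns."""
    X = pd.DataFrame(index=df.index)
    logs = {v: np.log(df[v]) for v in inputs}
    for v in inputs:
        X[f"ln_{v}"] = logs[v]
    # second-order terms: squares and each distinct cross-product once
    for a, v in enumerate(inputs):
        for w in inputs[a:]:
            X[f"ln_{v}.ln_{w}"] = logs[v] * logs[w]
    return X

# hypothetical vessel data: engine power (kW) and days fished
data = pd.DataFrame({"kw": [120.0, 250.0, 90.0], "days": [180.0, 210.0, 150.0]})
print(translog_terms(data, ["kw", "days"]))
```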

Alternative production functions include the Cobb-Douglas and CES (Constant Elasticity of Substitution) production functions. The Cobb-Douglas production function is given by:

$\ln Q_{j,t} = b_0 + \sum_i b_i \ln X_{j,i,t} + v_{j,t} - u_{j,t}$ (6)

As can be seen, the Cobb-Douglas is a special case of the translog production function in which all bi,k = 0. The Cobb-Douglas imposes more stringent assumptions on the data than the translog: the elasticity of substitution has a constant value of 1 (i.e. the functional form imposes a fixed degree of substitutability on all inputs), and the elasticity of production is constant for all inputs (i.e. a 1 percent change in input level produces the same percentage change in output, irrespective of the other arguments of the function).

The CES production function is given by:

$Q_j = b_0 \left[\delta X_{j,i}^{-\rho} + (1 - \delta) X_{j,k}^{-\rho}\right]^{-1/\rho} e^{v_j - u_j}$ (7)

where ρ is the substitution parameter, related to the elasticity of substitution by ρ = (1/σ) − 1 (where σ is the elasticity of substitution), and δ is the distribution parameter. The CES production function is limited to two inputs, and cannot be estimated in the form given in (7) by maximum likelihood estimation (MLE), making it unsuitable as the basis of a production frontier. However, a Taylor series expansion of the function yields a functional form of the model that can be estimated, given as:

$\ln Q_j = \ln b_0 + \delta \ln X_{j,i} + (1 - \delta) \ln X_{j,k} - \frac{1}{2}\rho\,\delta(1 - \delta)\left[\ln X_{j,i} - \ln X_{j,k}\right]^2 + v_j - u_j$ (8)

The model can be estimated as a standard or frontier production function, and the CES parameter values derived through manipulation of the regression coefficients. The functional form in (8) can be shown to be a special case of the translog function in which bi,i = bk,k = −0.5bi,k.

Given that both the Cobb-Douglas and CES production functions are special cases of the translog, ideally the translog should be estimated first and the restrictions outlined above tested. However, the large number of variables required to estimate the translog may result in degrees-of-freedom problems if a sufficiently long data series is not available. In such a case, the more restrictive functional forms must be imposed.

Separating capacity utilization from random variations in catch

To estimate the stochastic production frontier, an appropriate functional form is assumed (i.e. Cobb-Douglas, CES or translog production function) and the parameters of the model (including σv² and σu²) are estimated by MLE. Estimation of the maximum value of the logged likelihood function is based on a joint density function for the split error term ej = vj − uj (Stevenson, 1980). From this, technically efficient capacity utilization (TECU) can be calculated for the individual firm, given by:

$TECU_j = E\left[e^{-u_j} \mid e_j\right] = \left[\frac{1 - \Phi\left(\sigma_* - \mu_{*j}/\sigma_*\right)}{1 - \Phi\left(-\mu_{*j}/\sigma_*\right)}\right]\exp\left(-\mu_{*j} + \tfrac{1}{2}\sigma_*^2\right)$ (9)

where μ*j = −σu²ej/σ², σ*² = σu²σv²/σ², σ² = σv² + σu², γ = σu²/σ², and Φ(·) is the cumulative distribution function of a standard normal random variable (Battese and Coelli, 1988). From this, if γ = 0, then the expected value of the TECU score is one. That is, there are no deviations due to technical inefficiency or capacity underutilization (i.e. σu² = 0). If γ = 1, then all deviations are due to technical inefficiency and capacity underutilization (i.e. σv² = 0). Hence if 0 < γ < 1, deviations are characterized by both TECU and a random or stochastic component (Battese and Corra, 1977). Standard estimation programmes such as FRONTIER, discussed below, may be used to compute these estimates.
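As an illustration of how (9) operates, the sketch below computes the Battese and Coelli (1988) point predictor for the half-normal case. It assumes the composed residuals ej and the MLE variance estimates are already in hand (packages such as FRONTIER report these scores directly); the numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def tecu_scores(eps, sigma_v2, sigma_u2):
    """Battese and Coelli (1988) predictor E[exp(-u_j) | e_j] for the
    half-normal case (equation 9)."""
    sigma2 = sigma_v2 + sigma_u2
    mu_star = -sigma_u2 * eps / sigma2                # conditional mean of u_j
    sigma_star = np.sqrt(sigma_v2 * sigma_u2 / sigma2)
    z = mu_star / sigma_star
    # note 1 - Phi(sigma* - mu*/sigma*) = Phi(mu*/sigma* - sigma*), etc.
    return norm.cdf(z - sigma_star) / norm.cdf(z) * np.exp(-mu_star + 0.5 * sigma_star**2)

eps = np.array([-0.15, 0.05, -0.40, 0.10])            # hypothetical e_j = v_j - u_j
print(tecu_scores(eps, sigma_v2=0.02, sigma_u2=0.16).round(3))
```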

In order to separate the stochastic and TECU effects in the model, a distributional assumption has to be made for uj (Bauer, 1990). From the literature on technical efficiency estimation, four distributional assumptions have been proposed: an exponential distribution, i.e. uj ~ Exp(θ) (Meeusen and van den Broeck, 1977); a normal distribution truncated at zero, e.g. uj ~ |N(μ, σu²)| (Aigner, Lovell and Schmidt, 1977); a half-normal distribution truncated at zero, i.e. uj ~ |N(0, σu²)| (Jondrow et al., 1982); and a two-parameter gamma/normal distribution (Greene, 1990).

There are no a priori reasons for choosing one distributional form over another, and all have advantages and disadvantages (Coelli, Rao and Battese, 1998). For example, the exponential and half-normal distributions both have a mode at zero, implying that a high proportion of the firms being examined are perfectly efficient. The truncated normal and two-parameter gamma distributions allow for a wider range of distributional shapes, including non-zero modes, but are computationally more complex (Coelli, Rao and Battese, 1998). Empirical analyses suggest that the use of the gamma distribution may be impractical and undesirable in most cases. Ritter and Simar (1997) found that the requirement to estimate two parameters of the distribution may result in identification problems, and several hundred observations would be required before such parameters could be determined; further, a maximum of the log-likelihood function may not exist under some circumstances. Bhattacharyya et al. (1995), however, offer one approach for selecting the distribution used to reflect technical inefficiency: they suggest the use of a data-generating process.

Figure C.1 - Capacity utilization distributional assumptions: (a) half-normal; (b) truncated normal; (c) exponential

The half-normal, truncated normal and exponential distributions of the inefficiency term are illustrated in Figure C.1. The half-normal distribution assumes that the mode of the distribution is zero. This produces the greatest number of boats operating at full capacity in the estimated capacity utilization distribution (i.e. uj = 0 and hence TECU = 1, as e⁻⁰ = 1). In contrast, with the truncated normal, the mode of the distribution (the uj score with the greatest number of observations) is greater than zero. With such a distribution, the proportion of boats operating at full capacity in the sample can vary. The half-normal distribution is a special case of the truncated normal distribution, with the estimated mode being zero. Hence, the truncated normal distribution is the more general specification of the two, and the regression output can be tested to see if the mode (equivalent to the mean value in a non-truncated distribution) is equal to zero. The average capacity utilization in the sample is lower if a truncated normal distribution is assumed than if a half-normal distribution is assumed (unless the estimated mode of the truncated distribution is zero, in which case they are identical).

The exponential distribution also allows for a high number of boats to be operating at full capacity. While the range of TECU scores may be as great as (if not greater than) under the half-normal or truncated normal assumptions, the frequency of low TECU scores is lower than under the other two distributional assumptions. As a result, the average capacity utilization is likely to be higher under the assumption of an exponential distribution than under either of the other two.
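The contrast between the three assumptions is easy to see in a small simulation. The sketch below draws the one-sided term u from each distribution and compares the implied TECU = e⁻ᵘ scores; the parameter values are arbitrary and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma_u, mu, theta = 100_000, 0.3, 0.2, 0.3

u_half = np.abs(rng.normal(0.0, sigma_u, n))     # half-normal: mode at zero
u_trunc = rng.normal(mu, sigma_u, n)             # truncated normal: draw, then
u_trunc = u_trunc[u_trunc >= 0.0]                # discard negative values
u_exp = rng.exponential(theta, n)                # exponential: mode also at zero

for name, u in [("half-normal", u_half),
                ("truncated normal", u_trunc),
                ("exponential", u_exp)]:
    tecu = np.exp(-u)
    print(f"{name:17s} mean TECU = {tecu.mean():.3f}  "
          f"share with TECU > 0.99 = {(tecu > 0.99).mean():.3f}")
```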

Time variant TECU

An implicit assumption in estimating efficiency using the above specification is that efficiency is time invariant. A number of studies have attempted to estimate time varying efficiency, allowing for technological change to affect the efficiency measurement over time. For the estimation of TECU, it would be expected that technology would change over time, and that a time variant measure would be more relevant. Note also, however, that technical change may instead be assumed to shift the frontier, and thus appear in the production function specification instead of the stochastic specification underlying the inefficiency measurement.

Cornwell, Schmidt and Sickles (1990) replace the firm effect with a quadratic function of time whose parameters vary over firms (i.e. uj,t = δ0,j + δ1,j t + δ2,j t²). Kumbhakar (1990) also allowed a time-varying inefficiency measure, assuming that it was the product of the firm-specific inefficiency effect and an exponential function of time, such that:

$u_{j,t} = \left[1 + \exp\left(bt + ct^2\right)\right]^{-1} u_j$ (10)

where uj are assumed to be iid truncations at zero of the N(0, σu²) distribution (the half-normal case). This allows flexibility in how inefficiency changes over time, although no empirical applications have been developed using this approach (Coelli, Rao and Battese, 1998).

Battese and Coelli (1992) proposed a time-varying inefficiency measure given as:

$u_{j,t} = u_j \exp\left[-\eta\,(t - T)\right]$, t = 1, 2, ..., T (11)

where uj are assumed to be iid truncations at zero of the normal distribution N(μj, σu²) and η is the rate of change in efficiency over time. If η > 0, the TECU term uj,t declines steadily over time (i.e. it increases as (T − t) increases), whereas η < 0 implies that uj,t increases with time. Hence, one of the main problems of this model is that TECU is forced to be a monotonic function of time. This is not desirable, as capacity utilization might be expected to fluctuate from year to year, and changes in technology are likely to be discrete events rather than continuous. Again, this may be accommodated to some extent by including t instead in the production function specification, which for a translog model allows for cross-effects with all other arguments of the function, including potential measures of the resource stock.
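The monotonicity problem is easy to verify numerically. The short sketch below evaluates (11) for hypothetical values of uj and η: with η > 0 the series can only decline steadily toward uj at t = T, never fluctuate.

```python
import numpy as np

def u_time_varying(u_j, eta, T):
    """Battese and Coelli (1992) time decay (equation 11):
    u_{j,t} = u_j * exp(-eta * (t - T)), for t = 1..T."""
    t = np.arange(1, T + 1)
    return u_j * np.exp(-eta * (t - T))

# with eta > 0 the inefficiency term falls monotonically over time
print(u_time_varying(u_j=0.5, eta=0.1, T=5).round(3))   # [0.746 0.675 0.611 0.553 0.5]
```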

Inefficiency models

In many studies of technical efficiency, the results are used to estimate the effects of various factors on inefficiency. These may be estimated using either a one-step or two-step process. In the two-step procedure, the production frontier is first estimated and the technical efficiency of each firm derived. The efficiency scores are subsequently regressed against a set of variables, Zj,t, which are hypothesized to influence the firm’s efficiency. This approach has been adopted in a range of studies (e.g. Kalirajan, 1981; Pitt and Lee, 1981).

A problem with the two-stage procedure is a lack of consistency in assumptions about the distribution of the inefficiencies. In the first stage, inefficiencies are assumed to be independently and identically distributed (iid) in order to estimate their values. However, in the second stage, estimated inefficiencies are assumed to be a function of a number of firm-specific factors, and hence are not identically distributed (Coelli, Rao and Battese, 1998).

Kumbhakar, Ghosh and McGuckin (1991) and Reifschneider and Stevenson (1991) estimated all of the parameters in one step to overcome this inconsistency. The inefficiency effects were defined as a function of the firm-specific factors (as in the two-stage approach), but were incorporated directly into the MLE. Battese and Coelli (1995) also suggested a one-step procedure for using the model (now accounting for time), such that:

$\ln Q_{j,t} = b_0 + \sum_i b_i \ln X_{j,i,t} + v_{j,t} - u_{j,t}$ (12)

and the mean inefficiency is a function of firm-specific factors, such that:

$u_{j,t} = Z_{j,t}\,\delta + W_{j,t}$ (13)

where Zj,t is the vector of firm-specific variables which may influence the firm’s efficiency, δ is the associated vector of coefficients and Wj,t is an iid random error term.

Huang and Liu (1994) proposed a non-neutral stochastic frontier model. This is estimated by regressing the inefficiency term upon two sets of variables, Zj,t and Z*j,t, the first representing firm-specific variables which may influence the firm’s efficiency and the latter representing the interactions between Zj,t and the input variables in the stochastic frontier, such that:

$u_{j,t} = Z_{j,t}\,\delta + Z^{*}_{j,t}\,\delta^{*} + W_{j,t}$ (14)

This allows movement of the function to be biased towards certain inputs. However, it again imposes an assumption that the inefficiency determinants are linearly related to efficiency.

The various approaches discussed thus far raise the question of whether these determinants of efficiency should be accommodated in the production function specification itself, or as determinants of measured inefficiency. It would seem preferable to consider as many production determinants as possible in the technological specification, rather than in the stochastic specification, so that their productive effects (marginal products) are represented directly. This reduces the potential for calling something “inefficiency” when it may be explainable by the effective level of the productive inputs. This is particularly important if the efficiency and utilization components of overall deviations from the frontier are to be distinguished, which is necessary for unbiased estimation of capacity utilization. Appropriate representation of the characteristics of inputs, such as those comprising the “power” embodied in the capacity base, is critical for interpretable and usable capacity and utilization estimates.

“Unbiased” estimates of capacity output

As noted above, the stochastic production frontier approach was developed primarily to estimate technical efficiency. It also can be modified to produce estimates of capacity and capacity utilization by removing the constraining influence of variable inputs in the production function, usually represented for the fishery by a measure of “effort”, such as days or hours fished. The resulting “efficiency” score will combine both capacity utilization and technical inefficiency. Full efficiency capacity output can be estimated by scaling up current output by the efficiency score generated from this estimation process (i.e. by dividing current output by the efficiency score). However, this may be a biased measure of capacity output, because under normal working conditions it would be expected that most of the fleet would be operating at less than full efficiency, due at least in part to mis- or un-measured factors of production.

To reduce these distortions, an unbiased measure of capacity utilization may be derived by dividing the combined measure of capacity utilization and efficiency by the efficiency scores estimated in the traditional manner (e.g. estimated with the measure of capital utilization such as days or hours fished), such that:

$CU_j = \frac{TECU_j}{TE_j}$ (15)

where TECU is the combined measure of capacity utilization and efficiency and TE is the efficiency score computed for the full production function relationship, with the contribution of variable inputs incorporated rather than removed. This will result in a higher estimate of capacity utilization (i.e. as TE ≤ 1, CU ≥ TECU).

Capacity output is estimated by dividing the actual catch by the capacity utilization measure, or multiplying by the inverse capacity utilization ratio, 1/CU, often called a measure of overcapacity, such that:

$\text{Capacity output}_j = \frac{q_j}{CU_j}$ (16)

This can be estimated for every observation for every boat, and aggregated across the fleet to provide estimates of total capacity in each time period examined.
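A short worked example may help fix ideas; the numbers below are purely illustrative. Equation (15) strips the efficiency component out of the combined score, and (16) then scales the observed catch up to capacity output.

```python
# hypothetical scores for one boat
tecu = 0.72        # combined utilization-and-efficiency score (variable inputs removed)
te = 0.90          # standard technical efficiency score (variable inputs included)

cu = tecu / te     # equation (15): unbiased CU = 0.80 (CU >= TECU since TE <= 1)
catch = 1_000.0    # observed catch, in tonnes
capacity_output = catch / cu   # equation (16): 1250.0 tonnes
print(cu, capacity_output)
```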

Data requirements - panel, cross-sectional and time series data

In order to separate out the effects of random fluctuations in output from systematic differences due to inefficiency and capacity utilization, the estimation of TECU ideally requires repeated observations for the same boat. This requires a time series of information for a cross-section of boats in the population. This is generally referred to as panel data. Panel data may be balanced or unbalanced. Balanced panel data exist where there are an equal number of observations for all boats in the sample and every boat operates in every time period of the data. Unbalanced panel data occur when there are not an equal number of observations for each boat, and/or the boats do not operate in every time period of the data.

A difficulty with unbalanced panel data is that different sets of boats may be compared in different time periods, and there may be instances where some boats are never directly compared. Estimation is readily carried out for unbalanced panel data using programmes such as FRONTIER. But since efficiency and capacity utilization are relative (rather than absolute) measures, estimation may be problematic if there are only a few boats in the sample in given time periods, so that those boats are compared only with a small number of others in the same period. Ideally, the data set should be broad enough for this not to occur, and every boat should operate in the same period as every other boat (not necessarily all at the same time) at least once, and preferably more often. Time periods in which only a few boats are operating should be excluded from the data set. Similarly, boats that have only a few observations should be excluded from the sample, as their efficiency scores will be measured relative to only a few other boats in only a few time periods. This requires a subjective assessment of which observations to exclude. For example, Pascoe and Coglan (2002) included boats that had observations for at least four months a year in at least three of the four years of the data, a rule that is straightforward to apply in practice (see the sketch below). This resulted in only 63 boats out of a possible 457 being included in the analysis. In contrast, Kirkley, Squires and Strand (1995, 1998) limited their analysis to only 10 boats for which a long and consistent time series was available.
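A minimal pandas sketch of such a screening rule is given below, assuming a hypothetical logbook table with boat, year and month columns; the thresholds mirror the Pascoe and Coglan (2002) rule.

```python
import pandas as pd

def filter_panel(df, min_months=4, min_years=3):
    """Keep boats observed in at least `min_months` distinct months per
    year in at least `min_years` years."""
    months_per_year = df.groupby(["boat", "year"])["month"].nunique()
    qualifying_years = (months_per_year >= min_months).groupby("boat").sum()
    keep = qualifying_years[qualifying_years >= min_years].index
    return df[df["boat"].isin(keep)]

# hypothetical usage: observations = filter_panel(logbook_records)
```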

When only cross-sectional data are available (i.e. only one observation per boat), a strict assumption about the distribution of the inefficiency term is required. The resulting estimates of TECU will conform to the imposed distribution, and it is not possible to statistically distinguish between the nested distributions (i.e. half-normal and truncated normal). Similarly, if an inefficiency model is imposed, the TECU measures will conform to that model, and statistical measures of the parameters in the inefficiency model are not reliable. Consequently, there is little benefit in imposing such a model onto the data, and it is preferable to use the standard distributions (i.e. half-normal or truncated normal).

Despite these concerns, Sharma and Leung (1999) developed their model using cross-sectional data only and imposed an inefficiency model onto the data. As would be expected, most of the parameters were non-significant, with only one variable defining the inefficiency distribution at the 5 percent level of significance.

When only aggregated time series data are available, the estimation encounters similar problems to that of only cross-sectional data. While TECU can be estimated for each year for the fleet as a whole, it is highly sensitive to underlying assumptions about the TECU distribution.

Output measures

Although the SPF approach can be used to estimate the efficiency and capacity of a multispecies fishery or a multiple product technology, it is computationally complex to undertake. As a consequence, researchers often aggregate over different outputs to construct a composite output (e.g. cod plus haddock equals groundfish). The resulting capacity and capacity utilization estimates will, however, reflect the aggregated output, and may therefore yield inadequate estimates of capacity relative to individual species or products.

When data are limited, only aggregate output catch data may be available, which precludes consideration of relevant aggregation. Estimation of capacity and capacity utilization, however, may be influenced by changes in the catch composition, particularly if some species in the catch fluctuate substantially from year to year. These factors should be taken into consideration when reviewing the results of the analysis.

When data are available on a species basis, they need to be aggregated into a composite output measure. One method is to use the prices of the species as weights to estimate the total value of output. This approach is valid if it is reasonable to assume that fishers aim to maximize the value of their catch rather than the quantity.

The use of the aggregate value of the multi-product firm as the output measure has implications for the analysis. First, value is a function of prices as well as quantities, so price changes may affect the measurement of capacity utilization. A price index may be constructed to deflate the value series, removing general inflationary price changes and relative price changes between species and leaving only relevant “effective value” impacts such as quality changes. Details on the construction of such indexes are given in Coelli, Rao and Battese (1998). Further, if fishers are assumed to be profit maximizers, changes in relative prices may result in changes in fishing strategy. As a result, the function is not truly a production function, and the TECU scores may represent a combination of allocative as well as technical efficiency.
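One simple way to implement this is sketched below: species catches are aggregated by value and then deflated with a fixed-weight (Laspeyres-type) price index. This is only one of the index-number choices discussed by Coelli, Rao and Battese (1998), and the data are hypothetical.

```python
import numpy as np

def composite_output(prices, quantities, base=0):
    """Value-aggregate a multispecies catch and deflate it by a
    fixed-weight (Laspeyres) price index, with `base` as the reference
    period. prices, quantities: arrays of shape (T periods, S species)."""
    p = np.asarray(prices, dtype=float)
    q = np.asarray(quantities, dtype=float)
    value = (p * q).sum(axis=1)                 # nominal value per period
    index = (p * q[base]).sum(axis=1) / (p[base] * q[base]).sum()
    return value / index                        # deflated composite output

prices = [[10.0, 4.0], [12.0, 5.0]]             # two periods, two species
catches = [[100.0, 300.0], [90.0, 320.0]]
print(composite_output(prices, catches))
```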

The potential biases introduced into the analysis by using value as the output measure are not likely to be large. Squires (1987) and Sharma and Leung (1999) note that fishers base their fishing strategies on expected prices, the level of technology and resource abundance. However, price expectations are not always accurate, fishing gear is not species selective (so the species mix is a function of seasonal abundance) and changing gear types is time consuming and generally needs to be done onshore before the trip rather than at sea. Hence, the ability of fishers to immediately respond to changes in relative prices is limited. Finally, the effects of changes in price on the level of outputs can be incorporated through the use of a stock index as an input in the model that is based on the value of the available resource (i.e. stock multiplied by price).

Software packages

Two packages are generally available for estimating stochastic production frontiers - FRONTIER 4.1 (Coelli, 1996a) and LIMDEP (Greene, 1995). Both packages use the MLE approach. A recent review of both packages is provided by Sena (1999).

FRONTIER 4.1 is a single-purpose package specifically designed for the estimation of stochastic production frontiers and technical efficiency, while LIMDEP is a more general package designed for a range of non-standard (i.e. non-OLS) econometric estimation. An advantage of the former package (FRONTIER) is that estimates of efficiency are produced as a direct output from the package. The user is able to specify distributional assumptions for estimating the inefficiency term in a programme control file. In LIMDEP, the package estimates a one-sided distribution, but separation of the inefficiency term from the random error component requires additional programming.

Table C.1 - Distributional assumptions allowed by the software

Distribution                                    LIMDEP    FRONTIER
Time invariant firm-specific inefficiency
· Half-normal distribution                       yes        yes
· Truncated normal distribution                  yes        yes
· Exponential distribution                       yes        no
Time variant firm-specific inefficiency
· Half-normal distribution                       no         yes
· Truncated normal distribution                  no         yes
One-step inefficiency model                      no         yes

Source: Sena (1999).

FRONTIER is able to accommodate a wider range of assumptions about the error distribution term than LIMDEP (Table C.1), although it is unable to model exponential distributions. Neither package can model gamma distributions. Only FRONTIER is able to estimate an inefficiency model as a one-step process. An inefficiency model can be estimated in a two-stage process using LIMDEP, although this may create biases, because the distribution of the inefficiency estimates is pre-determined by the underlying distributional assumptions.

In the literature, the most commonly used package for estimating stochastic production frontiers is FRONTIER 4.1. This is freely available over the Internet from the Centre for Efficiency and Productivity Analysis, University of New England, Australia (http://www.une.edu.au/econometrics/cepa.htm). User guides and examples also are provided when downloading the software.

Example of use: Nigerian artisanal fishery

The Nigerian data used in the peak-to-peak analysis also was used to estimate capacity utilization and capacity output from the stochastic production frontier approach. An advantage of the SPF approach is that more inputs can be incorporated into the analysis than in the peak-to-peak approach. In this case, both the number of canoes and average crew per canoe could be used in the production frontier estimation.

With such a limited data set, a number of assumptions were necessary. First, a simple Cobb-Douglas production function was assumed, of the form $Q = b_0\,\text{Canoes}^{b_1}\,\text{Crew}^{b_2}\,e^{v - u}$, where Q is the actual total output and the b’s are parameters to be estimated. To estimate the model, the variables are logged to produce a linear version of the model (i.e. $\ln Q = \ln b_0 + b_1 \ln(\text{Canoes}) + b_2 \ln(\text{Crew}) + v - u$, where ln(x) represents the natural log of the variable x). As there were no data representing capital utilization (e.g. hours or days fished), only estimates of technically efficient capacity utilization (TECU) are possible, and the resulting estimates of capacity may be overestimated (as CU ≥ TECU from (15)).

As the data were aggregated, there was only one observation a year. An assumption was made that capacity utilization would vary over time, so a time-variant measure was required. In estimating the capacity utilization in each period, an assumption also had to be made about the distribution of the measures. Both the half-normal and the truncated normal were tested.

The models were estimated using FRONTIER 4.1. The results of the maximum likelihood estimation are given in Table C.2.
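For readers who want to see the mechanics rather than rely on a packaged routine, the sketch below estimates a half-normal, time-invariant Cobb-Douglas frontier by maximum likelihood in Python, using the Aigner, Lovell and Schmidt (1977) composed-error likelihood. It is not the FRONTIER 4.1 procedure used for the results reported here (which also handles the truncated normal and time-varying cases), and it runs on simulated stand-in data rather than the actual Nigerian series.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X):
    """Negative log-likelihood of the half-normal stochastic frontier:
    e_i = v_i - u_i, v ~ N(0, sv^2), u ~ |N(0, su^2)|, so that
    f(e) = (2/sigma) * phi(e/sigma) * Phi(-e*lambda/sigma)."""
    k = X.shape[1]
    beta = params[:k]
    sv, su = np.exp(params[k]), np.exp(params[k + 1])  # log scale keeps both positive
    sigma = np.hypot(sv, su)                           # sqrt(sv^2 + su^2)
    lam = su / sv
    eps = y - X @ beta
    ll = np.log(2.0 / sigma) + norm.logpdf(eps / sigma) + norm.logcdf(-eps * lam / sigma)
    return -ll.sum()

# simulated stand-in for an aggregate series: log canoes and log crew per canoe
rng = np.random.default_rng(0)
n = 19
ln_canoes = rng.normal(10.0, 0.2, n)
ln_crew = rng.normal(1.0, 0.1, n)
y = (-8.0 + 1.7 * ln_canoes + 0.5 * ln_crew
     + rng.normal(0.0, 0.1, n) - np.abs(rng.normal(0.0, 0.2, n)))
X = np.column_stack([np.ones(n), ln_canoes, ln_crew])

# OLS start values for beta, plus starting log standard deviations
start = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], np.log(0.1), np.log(0.2)]
res = minimize(neg_loglik, start, args=(y, X), method="BFGS")
print("beta:", res.x[:3].round(2), " log-likelihood:", round(-res.fun, 2))
```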

Table C.2 - Results from Maximum Likelihood Estimation

                                   Half-normal              Truncated normal
                                Coefficient  t-value      Coefficient  t-value
beta 0                             -8.74      -1.96          -7.51      -7.53
beta 1                              1.77       5.08           1.68      11.14
beta 2                              0.52       1.44           0.35       0.36
sigma-squared (σ²)                  0.20       0.68           0.03       1.31
gamma (γ)                           0.89       5.17           0.07       0.09
mu (μ)                         restricted to be zero          0.10       3.66
eta (η)                            -0.53      -1.35          -0.27      -0.51
log-likelihood function             8.55                      7.03
LR test of the one-sided error      4.86 (2 restrictions)     1.82 (3 restrictions)


From Table C.2, the values beta 0, beta 1 and beta 2 refer to the coefficients of the production function outlined above. The value of gamma (γ) indicates the proportion of variation in the model that is due to capacity utilization. Since this value is relatively high in the half-normal model (0.89), it suggests that much of the variation not due directly to changes in the level of fixed inputs is due to changes in capacity utilization. In contrast, the value of γ in the truncated normal model is low (0.07) and not significantly different from zero, suggesting that very little of the variation in output between years is due to differences in capacity utilization.

A series of tests can be conducted on the specification of the models. These involve imposing restrictions on the model and using the generalized likelihood ratio statistic (λ) to determine the significance of the restriction. The generalized likelihood ratio statistic (also known as the LR test) is given by:

$\lambda = -2\left[\ln\{L(H_0)\} - \ln\{L(H_1)\}\right]$ (17)

where ln{L(H0)} and ln{L(H1)} are the values of the log-likelihood function under the null (H0) and alternative (H1) hypotheses. The restrictions form the basis of the null hypothesis, with the unrestricted model being the alternative hypothesis. The value of λ has a χ² distribution, with degrees of freedom given by the number of restrictions imposed.
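Computing (17) and comparing it with a critical value takes only a few lines; the sketch below uses the log-likelihoods reported in Table C.2. Note that the ordinary χ² critical value applies only to interior restrictions; tests involving γ = 0 lie on a boundary of the parameter space, which is why the mixed chi-square values of Kodde and Palm (1986) are used instead (see the test described below).

```python
from scipy.stats import chi2

def lr_statistic(ll_h0, ll_h1):
    """Generalized likelihood ratio statistic (equation 17)."""
    return -2.0 * (ll_h0 - ll_h1)

# half-normal model (mu restricted to zero) as H0, truncated normal as H1
print(lr_statistic(8.55, 7.03))          # -3.04, cf. the mu = 0 test below

# ordinary chi-squared critical value for one interior restriction, 5 percent level
print(round(chi2.ppf(0.95, df=1), 3))    # 3.841
```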

A major test used to determine the existence of a frontier (i.e. H0: γ = 0) is the one-sided generalized likelihood ratio test of Coelli (1995). Since the alternative hypothesis is that 0 < γ < 1, the test statistic has an asymptotic mixed chi-square distribution, the critical values of which are given by Kodde and Palm (1986). If the null hypothesis is accepted, there is no evidence of underutilization of capacity in the data, and the production frontier is identical to a standard production function.

FRONTIER 4.1 produces the results of the one-sided generalized likelihood ratio test automatically. From the model results, the values of the LR test can be seen to be 4.86 and 1.82 for the half-normal and truncated normal models respectively. These can be compared with the critical values published in Kodde and Palm (1986) for two restrictions and three restrictions (representing the ‘degrees of freedom’ in the model) respectively. Standard statistical practice is to compare the results at the five percent level of significance, which allows for less than a five percent probability that the results are spurious (i.e. a 95 percent probability that the relationship is valid). The critical value at the five percent level of significance is 5.138 with two degrees of freedom and 7.045 with three degrees of freedom.

In this case, neither model satisfies the requirements of the test (because the computed values are less than the critical values), which suggests that in both cases the estimates of capacity utilization may be spurious. However, the half-normal model satisfies the requirements at the ten percent level of significance (critical value of 3.808), implying a ten percent probability that the results are spurious. While this is generally not considered sufficient to accept the results, for the purposes of this example the results will be assumed to be valid.

The models also can be compared using the LR test. The half-normal is a restricted form of the truncated normal, with the restriction that μ = 0. The value of the generalized likelihood ratio statistic in this case is λ = −2[8.55 − 7.03] = −3.04. Since the value is negative (and hence less than the critical χ² value, which is always positive), we cannot reject the hypothesis H0: μ = 0, and the model assuming the half-normal distribution is accepted.

The estimated capacity utilization derived from the analysis, and consequently the estimated capacity output based on the results, are given in Table C.3. The gradual decline in capacity utilization is partly an artefact of estimating the time-variant model, which only allows for a constant rate of change over time. From Table C.2, the value of eta (η) was negative, suggesting that technological change had decreased efficiency over time. More likely, this decline represents a decline in stock size over the period examined.

Table C.3 - Capacity utilization and output estimated using SPF

Year    Production    Capacity utilization    Capacity output
1976    327 561       1.000                   327 571
1977    331 280       1.000                   331 297
1978    336 138       1.000                   336 167
1979    356 888       1.000                   356 941
1980    274 158       1.000                   274 226
1981    323 916       1.000                   324 053
1982    377 683       0.999                   377 954
1983    376 984       0.999                   377 442
1984    246 784       0.998                   247 292
1985    140 873       0.997                   141 365
1986    160 169       0.994                   161 118
1987    145 755       0.990                   147 221
1988    185 181       0.983                   188 347
1989    171 332       0.972                   176 322
1990    170 459       0.953                   178 948
1991    168 211       0.921                   182 627
1992    184 407       0.870                   211 902
1993    106 276       0.791                   134 365
1994    124 117       0.674                   184 164

A comparison of the estimates of capacity from the SPF and peak-to-peak methods is illustrated in Figure C.2. For purposes of illustration, only the results using canoes as the key input for the peak-to-peak analysis are presented. From this, it can be seen that the two techniques produce estimates with broadly similar trends, and the capacity estimates for the last few years of the data (1989-94) are close.

The analysis presented here demonstrates that the SPF technique can be used to provide estimates of capacity and capacity utilization with minimal data requirements. More detailed data at the boat level disaggregated over time (e.g. monthly data) would result in more detailed estimates of capacity and capacity utilization. Further, information on time fished (e.g. hours or days) would allow estimates of technical efficiency also to be made, enabling correction for the potential biases that may be introduced into the analysis.

The example analysis also excludes a measure of the biomass stock. Ideally, some stock abundance measure might be incorporated into the analysis so the effects of changes in stocks on potential output can be estimated, providing more reliable estimates of capacity utilization.

Figure C.2 - Comparison of SPF and peak-to-peak estimates of capacity output


[48] Pascoe and Coglan (2000) estimated the effects of variations in efficiency upon physical capacity measures used in the UK and demonstrated the problems associated with assuming homogeneity in physical inputs.
[49] This represents the percentage change in output from a 1 percent change in the input level.
[50] This represents the degree to which one input is able to substitute for another as a result of relative input price changes while still holding output constant. The values range from 0 (which indicates the inputs are used in fixed proportions and are not substitutable) to infinity (in which case the inputs are perfectly substitutable and their use is highly responsive to relative price changes).
