As mentioned in the introductory section, the presented approach attempts to assist statistical developers with practical guidance prior to the implementation of data collection operations, when very little is known about the population under study. The described preliminary assessment of data collection requirements is based on geometrical rather than probabilistic criteria, and it should therefore be viewed as a conventional mathematical exercise rather than an unconventional statistical one. The author is of the opinion that such a priori guidance offers an optional methodological supplement fully compatible with the conventional statistical techniques and tools commonly applied in the subsequent phases of data analysis and parameter estimation.

The presented method is in line with a number of theoretical and empirical facts. For instance, researchers and statistical developers have long known and made use of the principle that sampling efficiency has a breakpoint at a critical sample size equal to the square root of the population size. With respect to reduced sampling efficiency when dealing with concave populations, empirical knowledge indicates that sampling for fishing effort is a less robust and riskier approach than sampling for catches and landings. However, the author has been unable to trace a practical guide summarizing such observations and providing a practical means for assessing the benefits and risks in an a priori selection of sample size.

It is also recognized that a weak element in the preparation of this paper was the lack of appropriate primary literature on aspects related to sampling efficiency. It is quite conceivable that several propositions and conclusions included in this note are already available in textbooks and technical papers. For instance, the fact that sampling efficiency has a breakpoint at a sample size equal to the square root of the population size is known to be a mathematically proven property but, regrettably, the author was unable to trace it through the literature available to him.

Another factor that considerably reduced the amount of supporting literature was that the author, from the very beginning, aimed at deriving data collection indicators under the assumption that no sampling has yet taken place, thus excluding all a posteriori statistical techniques for assessing sampling efficiency. The closest the author came to the use of accuracy as a measure of sampling efficiency is in Cochran's textbook on Sampling Techniques (1963), where the error of estimate is expressed as the absolute value of the difference between the sample estimate and the true population mean.
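Cochran's error of estimate can be sketched numerically. In the minimal example below, the synthetic population and the function name are illustrative assumptions, not part of the study; the error is simply the absolute difference between the sample mean and the true population mean of a finite population.

```python
import random

def error_of_estimate(population, n, seed=0):
    """Error of estimate in Cochran's sense: |sample mean - population mean|.

    Draws n units without replacement from a finite population.
    """
    rng = random.Random(seed)
    sample = rng.sample(population, n)
    sample_mean = sum(sample) / n
    true_mean = sum(population) / len(population)
    return abs(sample_mean - true_mean)

# Illustrative synthetic population (an assumption, not data from the study).
population = [float(i) for i in range(1, 101)]  # true mean = 50.5
err = error_of_estimate(population, n=10)
```

Note that this is an a posteriori measure: it requires knowing the true population mean, which is precisely what is unavailable in the pre-sampling setting the presented method addresses.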

With respect to the methodology used in the presented study, a question may arise concerning the somewhat lengthy approach by means of which the geometrical accuracy limits have been formulated. The author is convinced that the same results could be achieved by a simpler and more elegant mathematical process, and that further research in that direction would benefit potential users of the method. However, in the absence of a simpler solution the adopted approach is based on the following rationale:

(a) Rather than inferring population properties from samples, the method attempts to infer the behaviour of samples from population properties;

(b) Samples are not viewed as standalone cases but as parts of a progressive sampling pattern;

(c) Emphasis is on "safety in sampling", and to this end wide use is made of "pessimistic" assumptions. WOMAs are used to determine the lowest accuracy level, which serves as the starting point of a boundary curve. A lower limit of the areas formed by exponential A-curves is used to construct the boundary curves A_(x).

A second question would concern the reason for setting up accuracy boundaries instead of predicting the expected accuracy level at any sample size. In an earlier attempt to correlate accuracy to sample size, the author had derived an analytical model predicting the expected accuracy level at any given sample size. This was based on easily derived expected accuracy levels computed separately for concave and non-concave populations.

However, construction of expected accuracy curves, though of good utility if viewed as a supplementary indicator, would tend to predict acceptable accuracy levels even below the critical sample size, thus making users less aware of possible shortcomings resulting from accuracy fluctuation.

As a last remark, it could be said that the presented method stresses the point that, in assessing sampling requirements, a target population ought to be viewed as a unique case and handled with criteria and sampling practices specific to its size and properties. This means that adopting criteria and practices applicable to other populations, however effective these are known to be, would not always constitute an appropriate approach. Experience shows that statistical developers, the author included, tend at times to think in terms of proportionality and assume that if a sampling proportion has proved adequate for a population of size A it would also work well for a population of size B. The presented approach indicates that if a sampling proportion is good for a large population it would definitely not be as good for a much smaller population. Conversely, if a sampling proportion is known to be effective for a small population, a much larger population would certainly require a lower sampling proportion to achieve the same level of accuracy.
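The non-proportionality of sampling requirements can be illustrated with simple arithmetic based on the square-root breakpoint mentioned earlier. The sketch below is illustrative only and is not the author's boundary-curve method: it merely shows that the critical sampling proportion, sqrt(N)/N = 1/sqrt(N), falls as the population grows, so a proportion adequate for a small population over-samples a large one, and vice versa.

```python
import math

def critical_sample_size(N):
    """Critical sample size at the square-root breakpoint: n = sqrt(N)."""
    return math.sqrt(N)

def critical_proportion(N):
    """Sampling proportion needed to reach the breakpoint: 1/sqrt(N)."""
    return critical_sample_size(N) / N

for N in (100, 2_500, 10_000, 1_000_000):
    print(f"N={N:>9}: critical n={critical_sample_size(N):>6.0f}, "
          f"proportion={critical_proportion(N):.2%}")
```

For instance, a 10% sample reaches the breakpoint for a population of 100 but far exceeds what is needed for a population of 1 000 000 (0.1%), in line with the remark that proportions known to be effective for small populations over-specify requirements for large ones.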
