1. INTRODUCTION
2. EXPERIMENTAL DESIGNS
3. NUMBER OF OBSERVATIONS (OR REPLICATES) NEEDED IN AN EXPERIMENT
4. TESTING FOR SIGNIFICANT DIFFERENCES BETWEEN TREATMENTS
5. TRANSFORMATIONS
6. ISOCALORIC AND ISONITROGENOUS DIETS
7. CONTROL DIETS
8. REFERENCE
R. Hardy
University of Washington
Seattle, Washington
Preparation of an experimental design before starting experiments greatly simplifies the collection and interpretation of data. Proper formulation of the hypothesis to be tested will help keep the experiment on course and will prevent the worker from trying to answer too many questions with one experiment.
Generally, an experiment is conducted to answer a specific question. A hypothesis is formulated and the experiment is conducted to test it. The null hypothesis (H0) is that there is no difference between a control treatment and an experimental treatment. If no statistically significant difference is found, the null hypothesis is accepted; if a significant difference is found between treatments, the null hypothesis is rejected.
Four experimental designs are applicable to fish studies:
(i) The completely randomized design (CRD). The CRD is used when all of the experimental units are homogeneous.
(ii) The randomized complete block (RCB). The RCB is used when the experimental units are not homogeneous in one direction. Blocks are arranged so that the units within each block are homogeneous.
(iii) The Latin square. The Latin square design is useful when an experiment is conducted in a sectioned-off raceway and the experimental units must be shifted from one section to another periodically to randomize position effects.
(iv) The factorial design. The factorial design is used when treatments are combined and an interaction between treatments is expected. The results can be analyzed for significance of the treatments themselves (main effects) and for significant interactions between them.
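As a concrete illustration of the Latin square layout described above, a square can be generated by cyclic rotation so that every treatment appears exactly once in each row (say, time period) and each column (say, raceway section). This is a minimal sketch; the treatment labels are hypothetical:

```python
# Build an n x n Latin square by cyclic rotation: row i is the treatment
# list shifted by i positions, so each treatment occurs exactly once per
# row and once per column.
def latin_square(treatments):
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

# Four hypothetical diets rotated across four raceway sections / periods:
square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(" ".join(row))
```

Rotation is only one way to construct a Latin square; in practice the rows, columns, and treatment labels would also be randomized before use.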
Treatments should be replicated, and replicates should be randomized, to obtain a valid error term: one in which variation within each treatment is minimized by randomization, so that the variation is similar across treatments and comes from the same population of variation. The standard error of the mean (SEM) and the coefficient of variation (CV) are two common measures of variation reported in the scientific literature.
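The random assignment of replicates to experimental units in a CRD can be sketched as follows; the tank and diet labels are hypothetical:

```python
import random

def randomize_crd(treatments, replicates, seed=None):
    """Completely randomized design: each treatment appears `replicates`
    times, and the combined list is shuffled over the available units."""
    rng = random.Random(seed)
    assignment = [t for t in treatments for _ in range(replicates)]
    rng.shuffle(assignment)
    return assignment  # assignment[i] is the diet fed in tank i

# Four diets, five replicate tanks each, assigned to 20 tanks at random:
layout = randomize_crd(["diet 1", "diet 2", "diet 3", "diet 4"],
                       replicates=5, seed=1)
print(layout)
```

A fixed seed is used here only so the layout is reproducible for the record; for an actual experiment the shuffle should be genuinely random.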
One can calculate the number of observations needed in an experiment if one has an estimate of the variance (s²) and knows the level of significance desired to show a significant difference. There are two ways to calculate this: a long way, which has a confidence probability of 80-90 percent, and a quick, shorter way, which has a confidence probability of 50 percent. The long way is described below.
To determine the number of replicates (r), one needs an estimate of the variance (s²), the difference one wishes to detect (d = difference between control and experimental treatments), a t value for the level of significance (t0), and a t value for the confidence probability (t1). The number of replicates must satisfy:

    r ≥ 2 (s²/d²) (t0 + t1)²

where t0 is the tabulated two-tail t value at the chosen significance level and t1 is the tabulated t value at 2(1 − P), P being the desired confidence probability (for P = 0.90, t1 is the t value at 0.2).
If the expected difference between treatments is 2.5 percent in weight gain or amount of food eaten, and the variance (s²) of this measurement in other experiments with the same species of fish is 1.31, the calculations would be as follows for an experiment with four treatments:
for r = 6, with 25 df:
    t0 (at 0.05) = 2.060 (from t table)
    t1 (at 0.2)  = 1.316 (from t table)
    t0 + t1      = 3.376
    r ≥ 2(1.31/2.5²)(3.376)² = 4.8, so try r = 5

for r = 5, with 20 df:
    t0 (at 0.05) = 2.086 (from t table)
    t1 (at 0.2)  = 1.325 (from t table)
    t0 + t1      = 3.411
    r ≥ 2(1.31/2.5²)(3.411)² = 4.9
With five replicates, one can be 90 percent confident of detecting a difference of 2.5 percent, which would be a significant difference at the 5 percent level in a two-tail t test.
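The long-way calculation above reduces to a small function once the t values have been read from a t table for the relevant degrees of freedom. This is a sketch of that single step, not a general sample-size routine:

```python
import math

def replicates_needed(s2, d, t0, t1):
    """r >= 2 (s^2 / d^2) (t0 + t1)^2, rounded up to a whole number of
    replicates.  s2 = estimated variance, d = difference to detect,
    t0 = tabulated t at the significance level, t1 = tabulated t for
    the confidence probability."""
    return math.ceil(2 * (s2 / d**2) * (t0 + t1)**2)

# Worked example from the text: s^2 = 1.31, d = 2.5 percent,
# t values for 20 df: t(0.05) = 2.086, t(0.2) = 1.325
print(replicates_needed(1.31, 2.5, 2.086, 1.325))  # 5
```

Because the t values themselves depend on the degrees of freedom, and hence on r, the calculation is iterated (as in the text) until the assumed and computed r agree.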
If two treatments are used, a simple t test is often the best method to employ. If three or more treatments are used, analysis of variance is the method of choice. In analysis of variance the variation in the experiment due to treatments can be separated from the variation due to experimental error, and thus significant differences between treatments can be detected.
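To show how analysis of variance separates treatment variation from error variation, a one-way ANOVA F statistic can be computed directly from the definitions; the weight-gain data below are hypothetical:

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way layout.
    The between-treatment and within-treatment (error) sums of squares
    are computed from their definitions."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical weight gains for three diets, three replicates each:
f, df_b, df_w = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
print(f, df_b, df_w)  # F = 3.0 with 2 and 6 df
```

The computed F is then compared against the tabulated F for the same degrees of freedom at the chosen significance level.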
If the data do not fit the assumptions of the statistical model being used, they can sometimes be transformed so that they do. Three common transformations are
(i) the square root;
(ii) logarithmic;
(iii) arc sine.
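The three transformations can be applied as in the sketch below. The small constants added before the square root and the logarithm are the common adjustments for data containing zeros, and the arc sine transformation applies to proportions:

```python
import math

def sqrt_transform(x):
    """Square root: counts, especially small counts that include zeros."""
    return math.sqrt(x + 0.5)

def log_transform(x):
    """Logarithmic: data whose effects are multiplicative."""
    return math.log10(x + 1)

def arcsine_transform(p):
    """Arc sine (angular): proportions or percentages, 0 <= p <= 1."""
    return math.asin(math.sqrt(p))

print(sqrt_transform(0), log_transform(0), arcsine_transform(0.5))
```

The analysis of variance is then carried out on the transformed values, and means are back-transformed for reporting.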
A discussion of transformations, experimental design and general statistics can be found in a general textbook such as Principles and Procedures of Statistics by Steel and Torrie (1960).
To answer hypotheses properly, most diet experiments must be designed so that all factors are the same between treatments except the one factor being studied. Ingredient testing, where, in a given formulation, one ingredient, say cottonseed meal, is replaced with another, say meat and bone meal, is a very slow and inefficient way to arrive at the "ideal" formulation. Changing ingredient prices make an "ideal" formulation an ever-changing one anyway, unless cost is not a consideration. Diet experiments have basically one of two aims:
(i) to determine the requirements of fish for specific nutrients; or
(ii) to measure how well a particular feedstuff or feed formulation satisfies the nutritional requirements of fish, or how available the nutrients of a particular feed are to fish.
One common mistake in diet studies is the failure to ensure that dietary treatments have the same caloric density (isocaloric) or the same protein level (isonitrogenous) when they should. For example, if an experiment is designed to measure the optimum protein level for growth of, say, 20-50 g coho salmon in saltwater pens, one approach might be to formulate practical diets that vary in protein level from 30-45 percent in 2.5 percent increments. Protein level could be increased by adding herring meal to the diet and removing some basal feed, like wheat middlings, from the diet. However, the metabolizable energy (ME) of herring meal (4 432 kcal/kg) is much higher than that of wheat middlings (1 663 kcal/kg). Thus, simply exchanging ingredients will change the energy level of the diet as well as the protein level and confound the experiment with an extra variable. Energy must be added to some of the diets to make them isocaloric.
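The confounding can be seen with a quick calculation using the ME values from the text; the 10 percent exchange and the partial formulations below are hypothetical:

```python
# ME values (kcal/kg) quoted in the text
ME = {"herring meal": 4432, "wheat middlings": 1663}

def diet_me(formulation):
    """ME contribution of the listed ingredients; formulation is
    {ingredient: fraction of the diet}."""
    return sum(frac * ME[ing] for ing, frac in formulation.items())

# Exchanging 10 percent of the diet from wheat middlings to herring meal
# (to raise protein) also raises the diet's energy density:
delta = 0.10 * (ME["herring meal"] - ME["wheat middlings"])
print(delta)  # roughly 277 kcal/kg added per 10 percent exchanged

low  = {"herring meal": 0.20, "wheat middlings": 0.30}
high = {"herring meal": 0.30, "wheat middlings": 0.20}
print(diet_me(high) - diet_me(low))  # same difference, via the formulations
```

Seeing the size of the gap makes clear why an energy source must be added back to the lower-protein diets to keep the series isocaloric.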
The same principle can be applied to experiments that include diets with different protein sources as dietary treatments. In this case, care must be taken to make sure that the growth effects being measured are due to protein source rather than protein level. The diets are made isonitrogenous by varying the level of the protein sources according to the protein content of each.
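Making diets isonitrogenous is a matter of scaling each protein source's inclusion level by its protein content, as the text describes. A minimal sketch, with hypothetical protein contents:

```python
def inclusion_for_protein(target_protein_pct, source_protein_pct):
    """Percent of the diet a protein source must occupy to supply the
    target dietary protein level by itself."""
    return 100.0 * target_protein_pct / source_protein_pct

# Hypothetical: a 70 percent protein fish meal vs. a 50 percent protein
# soybean meal, both formulated to supply 35 percent dietary protein:
print(inclusion_for_protein(35, 70))  # 50.0 percent of the diet
print(inclusion_for_protein(35, 50))  # 70.0 percent of the diet
```

The remainder of each formulation must then be balanced (for energy and other nutrients) so that protein source is the only variable between treatments.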
It is often difficult to compare the results of growth experiments done at different research stations or from year to year at the same station because of differences in stock of fish used and rearing conditions. This situation could be simplified by the use of standard control diets, such as the Oregon test diet or test diet H440 (Halver's diet). Researchers around the world have experience with these diets and know what kind of growth to expect from their use. On the other hand, not all experiments with test diets lend themselves to the use of appropriate reference diets.
Steel, R.G.D. and J.H. Torrie, 1960. Principles and procedures of statistics. New York, McGraw-Hill Book Co.