Without a defined quality assurance programme all analytical results must be suspect.
(Harnly and Wolf, 1984)
The current uses of food composition data depend on the reliability of these data; however, achieving reliability and demonstrating that it has been achieved require a systematic and documented approach. There is now an extensive literature on analytical quality control for food analysis. Efforts to improve and standardize analytical quality at the international level have been advanced by organizations such as the International Organization for Standardization (ISO, 2003) and by the application of formalized principles such as good laboratory practices (GLP) (OECD, 1992, 1999) and total quality management (TQM) (Parkany, 1995).
The criteria for data to be entered into food composition databases were discussed in Chapter 1. To summarize, food samples should be representative of foods as consumed, as available for consumption or as produced (e.g. data for raw foods or commodities). The values should accurately represent the food samples analysed (see Table 8.1). It follows, then, that the basic principles of producing good-quality data are attention to: sampling; methods of analysis; the performance of the analytical methods; and the scrutiny and evaluation of the analytical values.
Sampling and methods of analysis have been addressed in Chapters 5, 6 and 7; this chapter deals with the latter two topics.
Table 8.1 Activities for assuring the quality of data
Activities: design of the sampling protocol; execution of the sampling protocol; preparation of analytical samples and portions.
Objective: food samples are representative of the foods "as consumed", as "available for consumption" or as produced (e.g. for compositional data at the commodity level).

Activities: choice of analytical method; execution of analytical procedures with appropriate numbers of samples and replicates; evaluation of analytical values.
Objective: analyses provide reliable values for the composition of representative samples of the foods.
Definitions of data quality, quality control and quality assurance used in this text (Table 8.2) are derived from those proposed by the International Organization for Standardization (ISO, 2003) for application to either a product or a service.
In practical terms, “quality assurance” is the sum of all the activities taken to ensure that the information generated by the laboratory is correct (Wilcox et al., 1978). This should be a deliberate process, not left to chance or brought into operation only when inadequacies are identified. A well-designed quality assurance programme (QAP) also provides laboratory workers and their supervisors with objective measures of performance, and an indication of whether or not the laboratory is achieving its goals.
Quality control has a much narrower meaning than quality assurance; it usually refers to procedures that are designed to ensure that data quality remains within defined limits. These include standards of precision and accuracy of the analytical operations, which depend on criteria set by the users of compositional data and the compilers of databases. The quality control standards set by analytical chemists may be unnecessarily strict for most nutritional purposes; however, quality control is still vital to ensure that bias is not introduced.
Table 8.2 Terminology in quality assessment
Data quality: summary of all the features that make the values appropriate for the intended use.
Quality control: the operational techniques and activities that are used to satisfy quality requirements.
Quality assurance: the assembly of all planned and systematic actions necessary to provide adequate confidence that a product, process or service will satisfy given quality requirements.
Good laboratory practices: the organizational process and the conditions under which laboratory studies are planned, performed, monitored, recorded and reported.
The aim of quality control, then, is to produce food composition data that meet the required standards and that can be produced efficiently and economically. This achievement requires the integration of several related steps: the proper specification of the quality of data required; production of the data to meet the intent of the specification; evaluation of the data to determine whether or not they meet the specification; and review of data usage to provide for the revision of the specification.
The term quality control is often used in only the narrowest sense (i.e. the monitoring of the performance of analytical methods) (Büttner et al., 1975); it should in fact cover all aspects of the analytical process, from food sample collection, handling and treatment of analytical samples, standards preparation, signal measurement and method validation, to data handling and evaluation (Harnly and Wolf, 1984; Garfield, 1984).
Quality assurance within the laboratory is implemented in three major modes:
The central feature of a quality assurance programme is the proper documentation of all the activities involved in the production of compositional data, from the design of the sampling protocol to the final production of analytical data.
The activities involved in quality assurance should include:
Quality assurance is effected through GLP, which comprise three major areas: management, quality control of sampling and quality control of analytical method performance.
Management is the general function of directing the food analysis laboratory to attain its goals. It involves not only administrative functions but also determining how the laboratory operates, what it is to accomplish, and whether or not it accomplishes what it sets out to do. The tasks of management in the present context are as follows:
Effective management is required for three areas of critical importance in laboratory function – the physical environment, personnel and administration.
Many food composition laboratories are less than ideal as physical structures. However, much can be achieved in adverse conditions, especially if the available space is well-organized and attention is paid to safety. Horwitz et al. (1978) list the following special needs of a food analysis laboratory: extremely good ventilation and fume hoods because of the extensive use of solvents and evolution of toxic and corrosive fumes; adequate power for heaters and instruments; high quality and volume of distilled (or deionized) water for reagent preparation and aliquot dilution; freedom from contamination – environmental (lead, asbestos, etc.), laboratory-generated (mercury, fumes, etc.) and housekeeping (dust, insects, rodents, etc.); and a large storage capacity for samples and reagents, including refrigerator and freezer space. Special facilities for the analysis of certain nutrients may be required, such as a “clean” environment for trace minerals and special lighting for light-sensitive nutrients. Few laboratories have such a complete range of specialized facilities, but the above list may be helpful in planning for upgrading of an existing laboratory. Practical advice is also available in a review by Rappoport et al. (1978).
As far as equipment is concerned, many laboratories may not be in a position to pick and choose. The main criterion is that the equipment be able to perform the tasks set. Specialized equipment and/or automation can lead to higher levels of precision and generally improve the quality control of analyses, but are not an essential prerequisite for sound analytical work.
Schedules for regular servicing, testing and replacement of equipment are helpful, and attention should be paid to safety and security; these topics are discussed in detail by Wilcox et al. (1978).
Selection and training of staff are critical, as is the opportunity for updating of skills. Ideally, each employee should have a clear job description and a clearly defined path for reporting to a supervisor. A high level of motivation is essential for good-quality work. It is best achieved by setting clear objectives and ensuring that the analysts see their role clearly in the operation as a whole. In all laboratory work, the worker at the bench is the main determinant of analytical quality, and this fact must be understood by the bench worker and by those responsible at all levels. Ideally, each employee should feel that her or his own work counts and that good-quality work is not only a team responsibility but also a team achievement.
Many laboratories conduct food composition work under contract by staff employed on a short-term basis. Maintenance of morale in such staff, though difficult, is an important objective of the programme.
Administration includes all aspects of paperwork in the laboratory. All the laboratory's procedures should be included in a quality assurance manual (QAM) that includes instructions on sampling, methods of analysis and quality control procedures. Further, a system must be designed and used for registering all food samples arriving in the laboratory. This register includes all the information required for identifying the food sample (see Chapter 5) and is linked to the recording of the final analytical results. This system accounts for all the samples arriving in the laboratory. The preparation of the manual formalizes procedures and, provided that the laboratory staff are encouraged to contribute comments and suggestions, assists in the development of good laboratory practice.
It is important that the manual is actually used by the staff whose working procedures it describes. There is a danger that, if the QAM comes to be seen as an end in itself rather than as a working document intended to provide guidance, it will not fulfil the objectives behind its preparation.
Staff should be encouraged to keep well-organized laboratory notebooks, and standard data sheets need to be developed for recording of the final analytical values. The separate but related process of setting up recording systems also provides a disciplined approach to the laboratory work and can identify potential problems. It is prudent, however, to test a new system in a pilot study before implementing it and to recognize that modifications may be required over time. A good recording system facilitates searches back through all calculations and measurements to identify and correct any errors that arose in recording.
Sampling is discussed in detail in Chapter 5; it is only necessary to stress here that quality control of sampling is the crucial first step in the entire quality assurance programme, and that the analytical staff should be involved in the design of the sampling plans. Indeed, direct participation in collection of the food samples provides the analyst with insight into the practical problems of sampling. The necessity for defined sample handling procedures in the laboratory must also be regarded as the concern of the analyst.
(adapted from Horwitz et al., with permission)

The third major area of implementing laboratory quality assurance is the quality control of the performance of the analyses. In food composition studies a great deal of attention must be paid to this area, since all food samples received for analysis should, in principle, be treated as having an unknown composition.
The performance of an analytical method requires validation of the entire system (Horwitz et al., 1978): the laboratory with its environment, equipment and reagents; the analyst, with her or his individual skills, experience and knowledge; and the method, with all of its idiosyncrasies and attributes.
The method is selected according to the relative importance of the various attributes, on the basis of previous experience or of reports in the literature. Choice of method is discussed in Chapters 6 and 7. However, it is essential for a laboratory to verify that the method performs properly in actual practice. As discussed in Chapters 6 and 7, each food substrate may present an entirely different set of problems for the analysis of any constituent. The selection or production of an appropriate standard matrix can require considerable skill and ingenuity.
First, the quality required of the analytical data must be specified. These specifications will be based on the reliability criteria discussed in detail in Chapters 6 and 7 (specificity, accuracy, precision, sensitivity [Büttner et al., 1975]), and will depend on both the component to be analysed and the matrix in which it occurs.
For example, in setting specificity criteria for analyses of vitamin C, it is essential that the method measures only ascorbic acid and dehydroascorbic acid, both of which exhibit vitamin activity. Interferences in most vitamin C methods are reasonably well understood by analysts, and may be controlled or allowed for in the analysis. For other nutrients, methods that measure a wide range of substances may be adequate (Chapter 7). Some components are hard to define analytically, and for these the currently available analytical methods are likely to be superseded.
The level of accuracy to which an analysis is conducted and reported should be set at a certain number of significant figures, depending on the precision of the method. Three significant figures are in most cases the maximum required in a food database, although many analytical systems generate more (and often spurious) significant figures. In nutrient analyses, the pursuit of accuracy in order to cite values to four or five significant figures reflects a false view of analytical capability (no method has this degree of accuracy) and is a misdirection of resources.
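The rounding to three significant figures described above is simple to implement; the helper below is an illustrative sketch (with an invented reading), not a procedure from the text:

```python
import math

def round_sig(value: float, figures: int = 3) -> float:
    """Round a value to the given number of significant figures."""
    if value == 0:
        return 0.0
    exponent = math.floor(math.log10(abs(value)))
    return round(value, figures - 1 - exponent)

# A raw instrument reading carrying spurious precision (hypothetical value)
print(round_sig(4.637218))   # reported to three significant figures: 4.64
print(round_sig(0.012345))   # works below 1 as well: 0.0123
```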
The precision required should be related not only to the method itself, but also to the expected level of the nutrient. As with accuracy, it may be wasteful to devote resources to improving precision if the level of the nutrient in the food is low in relation to the dietary intake as a whole, or if the food is rarely eaten. It is essential to establish realistic criteria for acceptable levels of precision; improving precision for values that already fall within 10 percent of the mean may be unnecessary. Stewart (1980) has suggested acceptable precision and accuracy standards for nutrient composition studies.
Wilcox et al. (1978) list the following as the most common causes of error in method performance:
Verification of method performance – an essential step when a method is introduced into the laboratory – can be carried out by the following techniques (Chapters 6 and 7 discuss the range of procedures used in validating methods when selecting analytical procedures):
1. Standard samples. Ideally, standards would be prepared containing known amounts of the constituent of interest, in the same physicochemical form and in a similar food matrix to the one to be analysed. Clearly this ideal is virtually impossible to achieve, but various substitutes are available for use as standards.
Reference materials (RMs) and standard reference materials (SRMs) support accurate and compatible measurements by certifying and providing samples with well-characterized composition. These materials are used to perform instrument calibrations in situ as part of overall quality assurance programmes, to verify the accuracy of specific measurements and to support the development of new measurement methods. RMs and SRMs are available for determining the nutrient contents of mixed diets and individual food matrices. SRMs are certified for dietary constituents such as ash, protein, carbohydrate, fat, energy, cholesterol, selected fatty acids, vitamins, selected minerals and trace elements. In the United States the National Institute of Standards and Technology (NIST, 2003a) provides many SRMs.
In Europe, the Institute for Reference Materials and Measurements (IRMM, 2003) operates as part of the Directorate-General Joint Research Centre of the European Commission. It provides certified reference materials (CRMs) in a variety of food matrices for macrocomponents, major and trace elements, 15 vitamins, five different fibre methods and other food components.
The SRMs and CRMs are as a rule rather expensive; they may be regarded as too costly to use routinely and alternatives must often be used.
For this reason, ASEANFOODS undertook the development of food reference materials with consensus values for various nutrients (Puwastien, Sungpuag and Judprasong, 1999; Puwastien, 2000), in collaboration with expert laboratories within and outside the Asia-Pacific region. Four food reference materials, namely rice flour (AS-FRM1), soybean flour (AS-FRM2), cereal-soy (AS-FRM3) and fish flour-1 (AS-FRM4), with consensus values for main nutrients and minerals, were developed and are now available from the ASEANFOODS Regional Data Centre. These reference materials have been used for laboratory quality control programmes and as test materials for laboratory performance studies in ASEAN (Association of Southeast Asian Nations) and other developing countries.
It may prove impossible to produce reference standards for some nutrients contained within a complex food matrix. A mixture of pure substances can be prepared but cannot simulate the physical properties or the interrelationships of components within such foods. In the absence of an SRM, a laboratory routinely performing certain types of determinations should provide itself with working standard materials (in-house standards); these consist of a large amount of a homogeneous product (with great care taken to achieve homogeneity) dispensed into small, sealed bottles and stored under conditions that prevent deterioration (Southgate, 1995). Portions of this material should be analysed periodically along with each analysis or series of analyses, and the results monitored by the use of control chart techniques. An example of a local standard “fresh” food reference material produced in Sweden, canned meat with certified values for moisture, ash, fat, nitrogen, sodium, sodium chloride and hydroxyproline contents, is described by Torelm et al. (1990). Other in-house standards can be developed and validated against purchased RMs, and this is useful when large purchases of costly RMs are out of reach.
The control chart is “a graphical chart with control limits and plotted values of some statistical measure for a series of samples or subgroups. A central line is commonly shown” (American Society for Quality Control, 1973). The results of a laboratory test are plotted on the vertical axis, versus time (in hours, days, etc.) plotted on the horizontal axis. Each laboratory test should be checked frequently, and the horizontal scale should be wide enough to hold up to three months of data. Since the control chart is a tool providing “real-time” analysis and feedback information, it should cover a sufficient period of time and provide sufficient data to indicate trends, “runs” above and below the central line, or any other manifestation of lack of randomness (Mandel and Nanni, 1978; Taylor, 1987).
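As a sketch of how such a chart's limits might be derived (mean ± 2s warning limits and ± 3s action limits are a common Shewhart-chart convention; the daily results below are hypothetical):

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Central line plus conventional warning (±2s) and action (±3s) limits."""
    m, s = mean(baseline), stdev(baseline)
    return {"centre": m,
            "warning": (m - 2 * s, m + 2 * s),
            "action": (m - 3 * s, m + 3 * s)}

# Hypothetical daily results (mg/100 g) for an in-house standard
baseline = [51.2, 50.8, 51.5, 50.9, 51.1, 51.3, 50.7, 51.0]
limits = control_limits(baseline)

# A new result is flagged when it falls outside the action limits
new_result = 52.4
low, high = limits["action"]
status = "in control" if low <= new_result <= high else "out of control"
print(status)
```

In practice each point would be plotted against time as described above, so that trends and runs, not just single excursions, become visible.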
Non-segregating powders, such as non-fat dry milk, gelatin and flour, have been proposed for use as in-house standards. Powder mixes for parenteral feeding are used routinely by at least one laboratory that runs a nationwide quality control programme (Ekstrom et al., 1984).
Constituents that occur only in fat create problems because they are not stable indefinitely, even at low-temperature storage, and antioxidants added to stabilize lipid components can interfere in analyses. One solution is to store high-lipid foods under nitrogen. However, in general, the RM should be renewed periodically, with provision for old and new standards to be analysed simultaneously as a further check.
When an SRM or in-house standard is available, it provides the most efficient method for regularly monitoring the performance of the laboratory's technique. The inclusion of a standard material in a series of determinations is considerably simpler than many of the other techniques described here. Standard samples carried through the regular analytical routines promptly will alert laboratory personnel to any problems, permitting immediate corrective action.
2. Normal (routine) samples. If an analysis is to be carried out on a substrate that is new to the laboratory, the selected method should be applied to a series of routine food samples containing a fairly wide range of the constituent of interest. If such a series is not available, a set should be prepared by the careful blending of known amounts of the constituent with a food sample of known composition. Direct addition of small quantities of a constituent to large weights of a food should not be attempted; low levels should be obtained by serial dilution, preferably starting with a solution of the constituent. The nature of the solvent, and whether or not the solvent is removed, will depend on the nature of the substance and the substrate. If the food sample cannot be fortified, the addition of known amounts of the constituent should be made at the earliest possible step in the method. The most useful type of series for validation is prepared from two samples of the same particle size (in the case of solids), one with a high level of the constituent of interest and the other with a low level. Analytical samples containing intermediate concentrations are prepared by weighing and mixing appropriate amounts of the two food samples.
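The intermediate concentrations obtained by mixing the high-level and low-level parent samples follow directly from the mixing fractions; a minimal sketch with invented iron values:

```python
def blend_series(conc_high, conc_low, fractions):
    """Expected concentrations of blends made by mixing (by weight) a
    high-level and a low-level sample in the given weight fractions."""
    return [f * conc_high + (1 - f) * conc_low for f in fractions]

# Hypothetical iron contents (mg/100 g) of the two parent food samples
series = blend_series(conc_high=12.0, conc_low=2.0,
                      fractions=[0.0, 0.25, 0.5, 0.75, 1.0])
print(series)   # evenly spaced concentrations from 2.0 to 12.0 mg/100 g
```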
3. Analytical check sample series. Certain organizations provide, on a continuing basis, food samples designed to check the stability and reliability of the analyses performed in member laboratories. Some of these samples, which may be of particular use to analysts carrying out food composition work, are detailed by Horwitz et al. (1978) and Wolf and Ihnat (1985a).
4. Authentic samples. It is sometimes helpful to analyse sets of samples that can be considered to be authentic representations of the foods concerned and whose composition is fully described in the literature, e.g. cow milk, wheat flour, etc.
5. Food samples previously analysed by a different method. When introducing an unfamiliar or new method, it is helpful to re-analyse food samples that have previously been analysed by another, established method. Such samples should be analysed by replicate determinations, and then re-analysed after accurate dilution with some inert material such as water, oil or sand. If replicates and differences between samples are satisfactory, it is usually safe to proceed.
6. Internal methods of checking reliability. The wide variety of commodities analysed in food composition studies usually precludes the immediate availability of reference standards, previously analysed samples, authentic samples or even normal samples. This presents a singular challenge to the analyst to prove the validity of the values obtained. Replicate determinations are an obvious choice. Reproducible replicates, particularly if the replicate analytical portions are of unequal size, usually indicate that no gross mistakes are being made, although they do not rule out consistent errors. Other internal methods of checking performance involve the preparation of a series of fabricated samples, the method of standard additions and check analyses by different analysts, methods and laboratories. Some of these internal methods are discussed in more detail in Chapters 6 and 7 and below.
Replicate determinations. Both precision and accuracy are assessed by means of replicate assays on portions of the same food sample (which are assumed to be stable and identical regarding the quantity of analyte being investigated). In statistical terminology, the replicate results are considered as random samples from a hypothetical population of replicates; the mean (as well as other measures of location or central tendency) of these samples reflects the performance of the method with respect to accuracy, and the standard deviation (as well as other measures of dispersion) reflects its precision.
Duplicate analyses are normally the minimum required for food composition studies. Agreement between duplicates should fall within the established precision of the method. Where agreement is outside these limits, additional replications are necessary. The mean should then be calculated on the basis of all the results, unless there are very persuasive reasons for excluding certain replicate values. It is not possible to make hard and fast rules for precision; guidelines must be developed for each nutrient, at the levels expected in each food matrix.
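The statistical summary described above reduces to a mean and a relative standard deviation; the vitamin C replicates below are invented for illustration:

```python
from statistics import mean, stdev

def replicate_summary(results):
    """Mean (reflects accuracy) and relative standard deviation in percent
    (reflects precision) of a set of replicate determinations."""
    m = mean(results)
    return m, 100 * stdev(results) / m

# Hypothetical replicate results for vitamin C, mg/100 g
m, rsd = replicate_summary([46.1, 44.9, 45.6])
print(f"mean = {m:.1f} mg/100 g, RSD = {rsd:.1f}%")
```

Whether a given RSD is acceptable depends, as the text notes, on the nutrient, its level and the food matrix.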
Recovery studies. When a constituent is available as a well-characterized material of known purity, it is possible to conduct recovery studies in which a defined amount of the constituent is added to portions of the food being analysed. Analysis of the food alone and with the added constituent can be used to calculate recovery of the added constituent (or "spike"). If a range of additions is made, the effects of concentration can be measured. Recovery of an added constituent can, however, give a misleading indication of how well the constituent naturally occurring within the food matrix is measured. If no materials are available for fortification, it may be necessary to fortify portions of the food sample itself, using the method of standard additions (see below).
In either case – a series of samples with added material or a sample enriched by a series of additions – the calculated concentrations after analysis should be a straight-line function of the added concentrations. For recovery to be classified as satisfactory, it should exceed 90 percent.
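The recovery calculation itself is straightforward arithmetic; the calcium figures below are hypothetical:

```python
def percent_recovery(unspiked, spiked, added):
    """Recovery of a known addition: 100 * (spiked - unspiked) / added."""
    return 100 * (spiked - unspiked) / added

# Hypothetical calcium results (mg/100 g): food alone vs food + 50 mg spike
rec = percent_recovery(unspiked=120.0, spiked=166.0, added=50.0)
print(f"recovery = {rec:.0f}%")   # exceeds the 90 percent criterion noted above
```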
Wolf (1982), stating that the method of standard additions is “used as a panacea for matrix effects”, cautions that “care must be taken that this technique is not misused. A basic assumption ... is that the element added to the sample completely interchanges chemically with the endogenous element and that the two react identically to the matrix. It is often difficult to validate this assumption. Also, the method of standard additions does not correct for spectral interferences, where the matrix introduces spurious signals to the detection system ... The method of additions also assumes a linear response curve in the range of the additions.” He concludes, however, that “the method of additions can be useful when the matrix effect has been fully identified as chemical in nature”, and the assumptions regarding interchange with endogenous elements have been validated.
Fortification of the sample itself is also inapplicable (despite apparently satisfactory recoveries) if the analyte is easy to recover when added as a pure material but, in its natural state, is physically or chemically combined with other constituents of the sample and therefore difficult to recover. This problem frequently occurs when protein is present, as it is in most foods; extraction of the analyte is then the most critical step.
Clearly, recovery tests have severe limitations as measures of accuracy. Poor recovery indicates that the method is not behaving properly, but good recovery does not guarantee satisfactory performance.
Check calculations and analyses. Perhaps the most useful check procedures used in food composition studies are check calculations and analyses.
The first step is for another analyst to perform independently all the calculations of the first analyst. These checks should include all the secondary operations, such as derivation of equations, standardization of solutions, preparation of standard curves, measurement of recorder peaks and calibration of instruments. This practice is one of the most cost-effective operations in laboratory management, because of the high frequency of mathematical errors and simple mistakes.
A second cost-effective practice is preparation of a new standard curve from freshly prepared standard solutions. The new standard curve should correspond fairly well with the original. Improper preparation of standard solutions from incorrect calculations, weighing or aliquoting is a frequent source of error. Because they are unstable, dilute standard solutions should be freshly prepared from more concentrated solutions.
The best kind of check analysis is for a second, preferably more experienced, analyst to repeat the analysis by the same method on a separate portion of the same analytical sample. The analysis cannot be considered a check analysis if it starts beyond the initial stage, for example, with an aliquot of a wet oxidation digestion. Preparation of a new analytical sample from the original food sample is better, because it permits estimation of error introduced during preparation of the analytical sample.
Repetition by the same method is not satisfactory, however, when that method contains an inherent bias, or a bias is consistently introduced by some characteristic of the commodity being analysed. In these cases a check analysis using a method based on a different principle (if available) is desirable. This approach is usually used only when rare or uncommon foods are analysed and are found to contain a nutrient at unusually high or low levels. It will not reveal errors introduced in analytical sample preparation.
Another possibility, which should be used more commonly and not as a last resort, is to send a sample of the food to another laboratory for analysis as an unknown. The order of magnitude of the constituent may be indicated, in order to eliminate the need for exploratory analyses. Analysis by a second laboratory for occasional checking of normal samples (see above) is also a good way to maintain analyst proficiency in both laboratories. Exchange of food or analytical samples is particularly useful when a new laboratory or an unfamiliar method is being set up.
Blind analyses. Ideally, all food samples should be coded, and a series of concealed replicates should be prepared by an analyst who will not make the actual determinations, so that the analysis can be carried out free from bias.
The variations permitted between replicates by the same analyst and between analysts in the same laboratory should be established for each routine analytical procedure and type of food. In the case of a well-documented method, the results of collaborative studies provide sufficient criteria for acceptability of values. The variations within a laboratory should be smaller, or at least no larger, than the variations between laboratories. In principle there is no reason why they should differ, but in practice variations occur in equipment, reagents and the approaches of the individual analysts.
In studies of a method or in check analyses, replicates should be analysed in separate batches and on different days. Comparison of results obtained under these conditions sometimes reveals systematic errors.
The correct recording of results can be aided if standard data-recording systems are drawn up for the laboratory. Data sheets may be printed or photocopied and supplied for use by the bench workers. In laboratories where computers are used for data acquisition directly from instruments, a computerized system can be used. All laboratory records must be kept in a systematic and accessible fashion so that an “audit trail”, or search back through the records to identify sources of error, can be instituted when required.
Horwitz et al. (1978) mention the problems experienced by AOAC associate referees conducting interlaboratory studies of new and improved analytical methods. They comment on the number of reports from collaborators who incorrectly calculate results, failing in simple tasks such as correct measurement of recorder peaks and insertion of appropriate values into a proportional equation.
To meet the obvious need for arithmetical accuracy in the performance of calculations, laboratory instruction manuals should describe the logic of the calculations and should provide examples; this clarity will help ensure that data are recorded correctly and inserted appropriately into the correct equations.
When area calculation is done by hand, each chart should be clearly labelled with the identity of each peak, the basis for identification, the peak area, etc. to permit cross-reference to laboratory notebooks. Rubber date-stamps are useful, and for some analyses a specially prepared rubber stamp may provide a convenient guide for entering peak identification, etc. on charts.
To detect calculation errors, a second person should ideally review the original raw data – recorder charts, meter readings, weights, volumes and times – and check the calculations. For chromatographic traces or spectral charts, the choice of peaks should be reviewed and compared with peaks of standards. This is also important when computing integrators are used if the printout is separated from the chart itself. The printed peak areas must be matched to the corresponding peaks, and retention times must be checked.
Charts should also be examined to ensure that instruments functioned properly, that there were no interfering materials, that peaks were all adequately resolved or separated, that appropriate sensitivities were used, and that blanks and controls were properly chosen and used.
When only one or a few samples of a given commodity are examined, little evidence is available from which to judge the reliability of the results; it becomes even more important to use proper checks on procedures at all stages.
A final check on the suitability and reliability of reported results lies in their consistency with previously reported values, with the literature, and with the known attributes of method performance.
Once an analytical result is obtained by a valid method of analysis, properly performed on a homogeneous analytical portion, several steps must be taken to ensure that the results are correctly interpreted in the context of the purpose for which analysis was carried out.
All values, whether expected or unexpected, should be subjected to scrutiny. Although the common practice of comparing new information with previously published values for the same food is useful, it can be a source of bias if the analyses are repeated only for deviant values; there may be a tendency to accept only data that conform to established values. Nonetheless, any samples producing unusually high or low results should be subjected to repeat analyses and specific validation, along with a few foods that yielded the expected values.
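The scrutiny described above can be made systematic with a simple screening rule. The sketch below flags a new result for repeat analysis when it deviates from a previously published value by more than a chosen fraction; the 30 percent tolerance and all values are illustrative assumptions, and, as noted above, some foods that yielded expected values should also be re-analysed to avoid bias.

```python
def flag_for_reanalysis(new_value, reference_value, tolerance=0.30):
    """Return True when a new result deviates from a previously published
    value by more than the stated fraction. The 30 percent tolerance is an
    illustrative assumption, not a recommendation from the text."""
    return abs(new_value - reference_value) > tolerance * reference_value

# Hypothetical iron values (mg/100 g): (new analysis, published reference)
results = {"maize meal": (4.9, 2.5), "wheat flour": (1.3, 1.2)}
for food, (new, ref) in results.items():
    if flag_for_reanalysis(new, ref):
        print(f"{food}: repeat analysis and validate ({new} vs published {ref})")
```

A flagged value is not wrong by definition; it simply triggers the repeat analyses and specific validation discussed above.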
If the unexpected values are validated analytically, the collection, handling or preparation of the food sample must be investigated. For example, any high values for minerals may be due to contamination in the laboratory (perhaps by a mill or homogenizer). In these cases, the analysis must be repeated in such a way that contamination does not occur. If all steps in the laboratory are shown to be non-contaminating, then one must consider possible sources of contamination in the environment of the plant or animal from which the food was obtained. If the food sample was collected in the cooked state, one must consider possible sources of contamination during cooking (e.g. iron pot, metal skewer, or an iron plate or roasting grid). If the food sample was prepared and collected in a way that represents the food as it is usually available to the community, then the contamination may be regarded as contributing a real and representative value to the food. However, since contamination arising from the environment or during cooking does not necessarily contribute to the usual composition of the food, attention should be drawn to these unusual values and their nutritional significance in any written reports.
Some simple calculations can be applied as approximate checks on the appropriateness of values. For example, the summed quantities of ash constituents must not exceed the total ash, nor should the sum of the determined constituents exceed 100 percent of the weight of the analytical sample in a complete analysis (summations falling within the range of 97 to 103 percent of analytical sample weight are generally acceptable). When complete analyses are available, common-sense tests such as these can assist in determining the reliability or, more frequently, the unreliability of the reported results.
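The summation checks described above are easily automated. This sketch applies the 97 to 103 percent acceptability range to a complete analysis and verifies that summed ash constituents do not exceed total ash; the proximate values are hypothetical.

```python
def summation_check(components, low=97.0, high=103.0):
    """Check that the determined constituents of a complete analysis sum to
    97-103 percent of the analytical sample weight (values in g/100 g)."""
    total = sum(components.values())
    return low <= total <= high, total

# Hypothetical complete analysis of a cereal product (g/100 g)
analysis = {
    "water": 12.0,
    "protein": 11.5,
    "fat": 2.0,
    "available carbohydrate": 65.0,
    "dietary fibre": 7.5,
    "ash": 1.6,
}
ok, total = summation_check(analysis)
print(f"sum of constituents: {total:.1f} g/100 g "
      f"-> {'acceptable' if ok else 'investigate'}")

# The summed ash constituents must not exceed the total ash (hypothetical values)
ash_constituents = {"K": 0.45, "P": 0.35, "Ca": 0.03, "Mg": 0.12, "Fe": 0.004}
assert sum(ash_constituents.values()) <= analysis["ash"]
```

A summation outside the acceptable range does not identify which constituent is at fault, but it reliably signals that the reported results require investigation.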
All reports of analytical data, published or unpublished, must list the procedures that were carried out in the laboratory to ensure the quality of the data (e.g. the levels of recoveries, the use of SRMs or other standards).
As a general rule, correction factors should not be applied when calculating the final reported result. Instead, the actual value found and the recovery factors determined in the course of analysis should both be reported. Recovery factors are usually not constant from one run to the next, and their variability is an important performance-related factor in interpreting the results of the analysis. When the recovery factor varies with the type of food, the appropriate factor should be applied to express results on a “recovery-corrected” basis. As previously indicated, the clearest way to avoid mistakes and ambiguity is to report the actual findings, the recovery factors and the corrected values.
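The reporting convention described above can be illustrated with a short calculation; the found value and recovery fraction below are hypothetical.

```python
def recovery_corrected(found_value, recovery_fraction):
    """Express an analytical result on a recovery-corrected basis."""
    return found_value / recovery_fraction

found = 8.2       # mg/100 g, value actually found (hypothetical)
recovery = 0.91   # recovery of added standard in this run (hypothetical)
corrected = recovery_corrected(found, recovery)

# Report the actual finding, the recovery factor and the corrected value together
print(f"found: {found} mg/100 g, recovery: {recovery:.0%}, "
      f"corrected: {corrected:.1f} mg/100 g")
```

Reporting all three numbers, rather than the corrected value alone, preserves the run-to-run variability of recovery as information for interpreting the result.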
A continuous system of quality control is difficult to maintain, but is essential. In a laboratory with a workload consisting of a variety of foods analysed for a variety of constituents, effort must be concentrated on as many applicable quality control procedures as possible. This situation requires the use of standard and previously analysed food samples, or of food samples analysed in other laboratories, as simultaneous controls, together with greater than normal participation in check sample series and collaborative trials. Analysts and laboratories that consistently maintain high-quality performance in check sample series and in collaborative trials would be expected to produce more reliable results in day-to-day routine analysis than laboratories that cannot produce evidence of the adequacy of their performance.
The consequences of failure to maintain a quality assurance programme justify the time and expense of its implementation. Incorrect data may have important consequences for consumers and for food composition data programmes; if the data are rejected by increasingly sophisticated database compilers, the laboratory that produced them loses credibility.