Risk assessment is seen primarily as a method of systematically organizing scientific and technical information, and its associated uncertainties, to answer specific questions about health risks. It requires evaluation of relevant information, and selection of the models to be used in drawing inferences from that information. Further, it requires explicit recognition of uncertainties and, when appropriate, acknowledgement that alternative interpretations of the available data may be scientifically plausible.
The steps involved in risk assessment of chemical hazards have been discussed at greater length elsewhere (NRC, 1983, 1994). Risk assessment is subject to uncertainties related to the data and to the selection of appropriate models. Uncertainties are discussed in further detail later in this report. However, it should be pointed out at this juncture that data uncertainties arise both from limitations in the amount of data available and from the evaluation and interpretation of data obtained from epidemiological and toxicological studies. Model uncertainties arise whenever data on phenomena observed under one set of conditions are used to estimate or predict phenomena likely to occur under other conditions for which data are not available.
The process of risk assessment requires adequate toxicological information, preferably based on standardized testing protocols which have been accepted by the international community. In addition, a credible risk assessment requires at least a minimum data set, such as those already defined by others, e.g. JECFA, JMPR, EPA, FDA, OECD.
Depending upon the chemical, empirically-based answers to toxicological questions may be available for the purpose of risk assessment. However, in no case will the scientific information be comprehensive enough to provide a high degree of certainty. When several sets of animal toxicology data are available, there are usually insufficient data to identify the set (i.e. species, strain, toxicity end-point) that best predicts human response. As a result, it has become traditional to rely on toxic responses which occur at the lowest dose in a study of acceptable quality.
Minimum data requirements for risk assessment are difficult to specify in advance. Hazard, dose-response, and exposure databases for substances that may become subjects of risk assessment vary enormously in size, scope, and quality. In some instances, the data may be very limited and practically impossible to obtain; this is especially the case for contaminants and naturally occurring substances. When a risk assessment is necessary, risk assessors are required to make the best use of whatever information is available and to deal explicitly with data uncertainties. In cases where this is not possible, risk assessors should provide the reasons for such judgements. Perhaps the appropriate option is to leave the question of minimum data requirements open to such case-by-case judgements.
Other issues related to the process of risk assessment include the use of default assumptions to fill knowledge and data gaps. This provides the advantage of ensuring consistency in approach and minimizing or eliminating case-by-case manipulations of the conduct of risk assessment to meet predetermined risk management objectives. One major disadvantage, however, is the potential for displacement of scientific judgement by rigid guidelines. One intermediate approach is to allow risk assessors to replace defaults in specific cases of chemicals for which relevant scientific data are available to support alternatives. Specific and explicit justification for any such departures should be provided.
Because data are often insufficient, hazard identification is best conducted using the weight-of-evidence approach. The approach requires an adequate and documented review of relevant scientific information obtained from appropriate databases, peer-reviewed literature and, if available, unpublished studies from other sources, such as industry. This approach places emphasis on studies in the following order: epidemiological studies, animal toxicological studies, in vitro assays and, lastly, quantitative structure-activity relationships.
During the design of epidemiological studies, or where positive epidemiological data are available, consideration must be given to variability in human susceptibility: genetic predisposition, age-related and gender-related susceptibility, and the impact of factors such as socio-economic status, nutritional status, and other possible confounding factors.
Because of the cost of epidemiological studies and the paucity of data such studies provide, hazard identification will ordinarily need to rely on data derived from animal and in vitro studies.
Adequate minimum data sets are generally available for food safety risk assessment and should be used. These include specification of the number of species/strains/stocks, use of more than one sex, appropriate selection of doses (see below), route of exposure, and adequate sample size. In general, the source of data (published studies, unpublished studies, corporate data, etc.) is not a point of great concern as long as studies are transparent and can be demonstrated to conform to GLP and QA/QC procedures.
Animal data from long-term (chronic) studies are critical, and should address significant toxicological effects/end-points, including cancer, reproductive/developmental effects, neurotoxic effects, immunotoxic effects, and others. Animal data from short-term (acute) toxicity studies will also be useful and should be generated. Animal studies should facilitate identification of the range of toxicological effects/end-points (including those listed). Data on the relationship between toxicity and essentiality should be gathered for those substances which are required to meet nutritional requirements, e.g. copper, zinc, and iron. Animal toxicological studies should be designed to identify a no-observed-effect level (NOEL), a no-observed-adverse-effect level (NOAEL) or a benchmark dose; that is, doses should be selected to identify these end-points. Doses should also be selected at levels high enough to reduce the likelihood of false negatives as much as possible, while considering issues such as metabolic saturation and cytotoxicity- or mitogen-induced cell proliferation. At present, the selection of the highest dose for chronic rodent bioassays is being debated; discussion is focused on the selection, use, and interpretation of data from studies which employ the Maximum Tolerated Dose (MTD). Mid-range doses should be selected to provide relevant information on the shape of the dose-response curve.
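The identification of a NOAEL from dose-group data can be illustrated schematically. The sketch below is a deliberate simplification (a real determination rests on statistical comparison with controls and expert judgement); all doses, group sizes and incidence figures are hypothetical.

```python
# Simplified sketch of NOAEL identification from hypothetical
# chronic-study data. Doses are mg/kg bw/day; each group records
# (dose, animals affected, animals tested). Illustrative values only.

def find_noael(groups, control_rate):
    """Return the highest dose whose response rate does not exceed
    the control rate (a stand-in for a proper statistical test)."""
    noael = None
    for dose, affected, n in sorted(groups):
        if affected / n <= control_rate:
            noael = dose
        else:
            break  # this and all higher doses show an excess response
    return noael

# Hypothetical study: four dose groups of 50 animals each
study = [(1.0, 1, 50), (5.0, 2, 50), (25.0, 9, 50), (125.0, 30, 50)]
control = 2 / 50  # 2 of 50 control animals affected

print(find_noael(study, control))  # → 5.0 (highest dose without excess response)
```

In practice the comparison against controls would use an appropriate statistical test rather than a simple rate threshold, and the adequacy of group size and dose spacing would be judged against the study-design criteria described above.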
Animal studies should, where possible, not only identify potential adverse effects for human health but also provide information on the relevance of these effects to human risk. Information on relevance may be provided by studies that characterize the mechanism of action, the relationship between administered and delivered dose, and by pharmacokinetic and pharmacodynamic studies.
Mechanistic data may be supplemented by data from in vitro studies, such as information on genotoxicity derived from reversion assays or other similar assays. These studies should be conducted following GLP, and other widely accepted protocols. However, data from in vitro studies should not be used as the sole source of information to predict human risk.
The results of in vivo and in vitro studies can enhance the understanding of mechanisms and pharmacokinetics/dynamics. However, such information may not be available in many cases and the risk assessment process should not be delayed pending development of mechanistic and pharmacokinetic/dynamic data.
Information on administered versus delivered dose will be useful as part of the evaluation of mechanism and pharmacokinetic data. The assessment should also consider information on chemical speciation (administered dose) and metabolite toxicity (delivered dose). As part of this consideration, the issue of chemical bioavailability should be addressed (bioavailability of parent compound, metabolites, etc.) with specific consideration given to absorption across the appropriate membrane (i.e., the gut), transport to systemic circulation, and, ultimately, to the target organ.
Finally, structure-activity relationships may be useful to increase the weight-of-evidence for human health hazard identification. Where classes of compounds are of interest (e.g. polycyclic aromatic hydrocarbons, polychlorinated biphenyls and dioxins), and where adequate toxicological data are available on one or more members of the class, a toxic equivalence approach may be useful to predict the human health hazard associated with exposure to other members of the class.
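The arithmetic of the toxic equivalence approach can be sketched as follows: each member of the class is assigned a toxic equivalency factor (TEF) relative to a well-studied reference compound, and measured concentrations are summed on that common scale. The congener names, TEF values and concentrations below are illustrative placeholders, not authoritative assignments.

```python
# Sketch of the toxic equivalence (TEQ) approach for a class of
# compounds such as dioxins. All values are illustrative only.

def total_teq(measured, tefs):
    """Sum of concentration × toxic equivalency factor over the
    congeners present, expressed in reference-compound equivalents."""
    return sum(conc * tefs[name] for name, conc in measured.items())

# Hypothetical congener concentrations in a food sample (pg/g)
sample = {"congener_A": 0.5, "congener_B": 2.0, "congener_C": 10.0}
# Hypothetical TEFs relative to the reference compound (TEF = 1.0)
tefs = {"congener_A": 1.0, "congener_B": 0.1, "congener_C": 0.01}

print(total_teq(sample, tefs))  # → 0.8 pg reference-compound equivalents/g
```

The resulting TEQ can then be carried through the remainder of the risk assessment as if it were a concentration of the reference compound.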
In recent years it has become possible to discriminate between carcinogens and to identify a category of non-genotoxic carcinogens that are not themselves capable of producing mutations but act at later stages of the cancer process on cells already "initiated" by other carcinogens or by other processes, e.g. radiation. In contrast, other carcinogens induce genetic alterations in somatic cells, with activation of oncogenes and/or inactivation of tumour suppressor genes. Thus, genotoxic carcinogens are defined as chemicals which can cause genetic alterations in target cells, either directly or indirectly. While the major target of genotoxic carcinogens is the genetic material, non-genotoxic carcinogens act at extra-genetic sites, presumably leading to enhanced cell proliferation and/or sustained hyperfunction/dysfunction at the target sites. Regarding species differences in carcinogenic effects, a large body of data indicates that quantitative differences exist for both genotoxic and non-genotoxic carcinogens. In addition, certain non-genotoxic carcinogens, called rodent-specific carcinogens, can be cited as examples of substances for which there are qualitative differences in the ultimate carcinogenic effects. In contrast, no such clear-cut examples have been reported for genotoxic carcinogens.
Toxicologists and geneticists have devised tests to detect chemicals capable of causing mutations in DNA; the Ames test is a well known example. Several such tests, both in vitro and in vivo, are used, typically in the form of a battery, to determine the mutagenic potential of chemicals. While the exact tests to include in such a battery may be debatable, in general these tests have been useful in distinguishing between genotoxic and non-genotoxic carcinogens.
Food safety authorities in many countries now make a distinction between genotoxic and non-genotoxic carcinogens. While this distinction cannot be applied in all instances due to insufficient information or knowledge on carcinogenesis, the concept can still contribute to the establishment of evaluation strategies for cancer risks posed by exposure to chemicals. In principle, non-genotoxic carcinogens may be regulated using a threshold approach, such as the "NOEL-safety factor" approach. In addition to the demonstration that the substance is not likely to be a genotoxic agent, scientific information is often required on the mechanism of carcinogenicity.
A safe level or Acceptable Daily Intake (ADI) is derived from an experimental NOEL or NOAEL by applying appropriate safety factors. The conceptual basis for their use is that thresholds will exist at reasonably comparable doses in both humans and experimental animals. Humans, however, may be more sensitive, more genetically heterogeneous (outbred) and more variable in dietary habits. As a consequence, a safety factor is applied by JECFA and JMPR to take these uncertainties into account. A safety factor of 100 is typically applied when data from long-term animal studies are available, but other safety factors are used by different health agencies. JECFA also uses a larger safety factor when the data are minimal or when the ADI is assigned on a temporary basis. Other health agencies adjust the ADI for the severity or irreversibility of the effect. These differences in ADI values constitute an important risk management issue which deserves attention by appropriate international bodies.
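The derivation just described can be expressed as a simple calculation: the ADI is the experimental NOEL or NOAEL divided by the product of the safety factors. The sketch below uses the conventional 100-fold factor (10 for interspecies differences × 10 for intraspecies variability); the NOAEL value and the optional extra factor are illustrative.

```python
# Sketch of ADI derivation from an animal NOAEL using safety factors.
# The 100-fold default reflects 10 (interspecies) × 10 (intraspecies);
# `extra` stands in for the larger factor applied when data are minimal
# or the ADI is temporary. The NOAEL value below is illustrative.

def adi_from_noael(noael_mg_per_kg, interspecies=10, intraspecies=10,
                   extra=1):
    """ADI in mg/kg body weight per day."""
    return noael_mg_per_kg / (interspecies * intraspecies * extra)

print(adi_from_noael(5.0))            # → 0.05 mg/kg bw/day (5.0 / 100)
print(adi_from_noael(5.0, extra=10))  # → 0.005 with a 1000-fold factor
```

Different agencies vary the factors rather than the formula, which is why, as noted above, ADI values for the same substance can differ between bodies.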
The message communicated with an ADI is that there is no significant risk if the chemical is ingested at or below the ADI. The safety factor, as indicated, is selected to subsume anticipated variations in human responses. It is, of course, theoretically possible that some individuals are even more sensitive than provided for by the safety factor. The safety factor approach, like the quantitative risk approach discussed below, cannot guarantee absolute safety for everyone.
Another approach to ADI development has been to move away from reliance on the NOEL/NOAEL and toward the use of a lower effective dose, such as the ED10 or ED05. This approach, termed the benchmark dose approach, draws more heavily on data near the observed dose-response range, but is still subject to the application of safety factors. Thus, while it may allow a more accurate prediction of low-dose risk, a benchmark dose-based ADI may not differ significantly from a NOEL/NOAEL-based ADI. Special population groups, such as children, are protected by an appropriate choice of the intraspecies conversion factor and, if necessary, by special consideration of their exposures (see 5.4 Exposure assessment).
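A minimal sketch of a benchmark-dose calculation, assuming a one-hit dose-response model P(d) = 1 − exp(−b·d) with zero background: the ED10 is the dose producing 10% extra risk, obtained by inverting the fitted model. The slope parameter below is a placeholder; in practice it would be estimated from the bioassay data, and a confidence bound on the dose would normally be used.

```python
import math

# Sketch of a benchmark dose under an assumed one-hit model,
# P(d) = 1 - exp(-b*d). Solving P(d) = bmr for d gives the
# benchmark dose. The slope value is a hypothetical placeholder.

def benchmark_dose(bmr, slope):
    """Dose giving extra risk `bmr` (e.g. 0.10 for the ED10)."""
    return -math.log(1.0 - bmr) / slope

b = 0.02                        # hypothetical fitted slope (per mg/kg bw/day)
bmd10 = benchmark_dose(0.10, b)
adi = bmd10 / 100               # safety factors are still applied to the BMD
print(round(bmd10, 2), round(adi, 4))
```

As the text notes, because the safety factors are applied in the same way, the resulting ADI is often of the same order as a NOAEL-based one.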
Various extrapolation models have been utilized for this purpose. Current models use experimental measurements of tumour incidence and dose and virtually no other biological information. None of these models has been validated beyond the experimental range, and no correction for high-dose toxicity, enhanced cellular proliferation, or DNA repair is made. For these reasons, the current linear models are considered to yield conservative estimates of risk; this is usually expressed by characterizing the risks generated by such models as "plausible upper bounds" or "worst-case estimates". Many regulatory agencies acknowledge that actual or probable human risks are not being predicted. Some countries attempt to reduce the conservatism inherent in linear extrapolation by using non-linear models. An essential component of this approach is the determination of an acceptable risk level. In the USA, FDA and EPA have chosen a risk level of one in a million (10⁻⁶), because it was considered to represent an insignificant risk. The choice of a risk level is, however, ultimately a risk management decision for each country.
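Under the linearized model described above, upper-bound lifetime risk is approximately the slope multiplied by the dose, so the dose corresponding to a chosen acceptable risk level follows by simple division. The slope value in the sketch below is hypothetical, not taken from any actual assessment.

```python
# Sketch of linear low-dose extrapolation: upper-bound risk ≈ slope × dose,
# so the "virtually safe dose" for an acceptable risk level is
# risk / slope. The slope value is an illustrative placeholder.

def virtually_safe_dose(acceptable_risk, slope):
    """Dose (mg/kg bw/day) whose plausible upper-bound lifetime risk
    equals the chosen acceptable level (a risk management input)."""
    return acceptable_risk / slope

q1_star = 0.5  # hypothetical upper-bound slope, (mg/kg bw/day)^-1
print(virtually_safe_dose(1e-6, q1_star))  # → 2e-06 for a 1-in-a-million risk
```

Because the slope is itself an upper bound, the resulting dose is conservative: the true risk at that dose may be far lower, and is possibly zero.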
For food additives and residues of pesticides and veterinary drugs, a fixed level of risk is practical, as the substances can be disallowed if the estimated risk exceeds the regulatory acceptable level. For contaminants, however, including discontinued pesticides which have become environmental contaminants, an established acceptable level can easily be exceeded. For example, in the USA, dioxins are estimated to present a worst-case risk of around 10⁻⁴. For ubiquitous carcinogenic contaminants like polycyclic aromatic hydrocarbons and nitrosamines, the 10⁻⁶ risk level is also exceeded.
Dietary intake determinations can be relatively straightforward for additives, pesticides and veterinary drugs, as the relevant foods and their use levels are specified by their approved conditions of use. However, the actual levels of additives and residues of pesticides and veterinary drugs present in foods are often well below the maximum levels permitted. Residues of pesticides and veterinary drugs are often entirely absent from foods because only a portion of the crop or animal population is usually treated. Data on the levels of food additives in foodstuffs can be obtained from the manufacturers. The dietary intake of contaminants requires information on their distribution in foods, which can only be obtained by analyzing representative samples of foods with sufficiently sensitive and reliable analytical methods. Guidelines for establishing or strengthening national food contamination monitoring programmes have been elaborated (GEMS/Food, 1979).
Maximum Residue Limits (MRLs) for pesticides and veterinary drugs and Maximum Levels for additives can be established from their conditions of use. In the simplest case, a food additive used at a specific level would be stable in the food until consumption. The Maximum Level would then equal the intake level. However, in many cases, the amount of the chemical of interest may change prior to consumption. For example, food additives may degrade during storage or react with the food. Pesticide residues in raw agricultural products may degrade/accumulate during further processing. The fate of veterinary drug residues in food products is influenced by metabolism, kinetics, distribution and withdrawal periods required for treated animals.
The establishment of MRLs must take into account any changes in the nature or level of the residue that may occur prior to a commodity entering commerce or that may occur under any anticipated conditions of subsequent use. Contaminants have no intended technological effect in the food, and guideline levels are usually set as low as reasonably achievable.
The theoretical total dietary intake of additives, pesticides and veterinary drugs must be below their corresponding ADIs; frequently, the actual intake is well below the ADI. Setting guideline levels for contaminants presents special problems. There is usually a paucity of data with which to establish a provisional tolerable intake. On occasion, the levels of the contaminants are higher than an established provisional tolerable intake would permit. In these cases, guideline levels are set on the basis of economic and/or technical considerations.
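The comparison of a theoretical total dietary intake against an ADI amounts to summing residue level × consumption over the relevant foods and expressing the result per kilogram of body weight. The food list, residue levels, consumption figures, body weight and ADI below are all illustrative placeholders.

```python
# Sketch of a theoretical dietary intake check against an ADI.
# Residue levels, consumption figures and the ADI are illustrative.

def theoretical_intake(foods, body_weight_kg=60.0):
    """Daily intake in mg/kg bw: sum of residue level (mg/kg food)
    × consumption (kg food/day), divided by body weight."""
    total_mg = sum(level * consumed for level, consumed in foods)
    return total_mg / body_weight_kg

# (residue level mg/kg, consumption kg/day) for hypothetical foods
diet = [(0.05, 0.3), (0.10, 0.1), (0.02, 0.5)]
adi = 0.001  # mg/kg bw/day, hypothetical

intake = theoretical_intake(diet)
print(intake, intake <= adi)  # intake, and whether it falls below the ADI
```

Because the calculation uses maximum permitted levels and assumes every food carries the residue, it overstates actual intake, which is consistent with the observation above that actual intakes are frequently well below the ADI.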
Reliable food intake data are essential for exposure assessments based on measuring levels of chemical agents in food. Detailed food consumption data for the average and median consumer as well as for different population groups are important for assessing exposure, particularly by sensitive groups. In addition, comparable food consumption data, particularly with respect to staple foods from different regions of the world are essential for developing an international risk assessment approach to food safety.
GEMS/Food currently maintains a database of five regional diets as well as a composite "global" diet. Daily dietary intakes of nearly 250 individual primary and semi-processed food commodities are available. The African, Asian, East Mediterranean, European and Latin American regional diets are based on selected national data from FAO Food Balance Sheets. Consumption data derived using this approach provide no information on extreme consumers. No information is available in GEMS/Food on the intake of food additives, although intakes in developed countries are anticipated to be greater than in developing countries because of the higher proportion of processed foods in the diet.
At the risk characterization step, the uncertainties involved in each step of the risk assessment process should be described; uncertainty in risk characterization will reflect the uncertainties in the preceding steps. The extrapolation of results of animal studies to the human situation may produce two types of uncertainty: (i) uncertainty with respect to the relevance of the experimental findings to humans, for example, forestomach tumours in rats fed butylated hydroxyanisole (BHA) and neurotoxic effects in mice produced by aspartame may not have human parallels; and (ii) uncertainty with respect to specific human sensitivities to the effects of a chemical that cannot be studied in experimental animals, of which hypersensitivity to glutamate is an example. In practice, these uncertainties are dealt with by expert judgement and by additional studies, preferably in humans. These studies may be performed during the pre-marketing phase as well as during the post-marketing phase.