Hazard characterizations are typically developed by compiling information from a variety of data sources, generated using a wide range of test protocols. Each of these data sources contributes in varying degrees to an understanding of the pathogen-host-matrix interactions that influence the potential public health risks attributable to different disease agents. An appreciation of the strengths and limitations of the various data sources is critical to selecting appropriate data for use, and to establishing the uncertainty associated with dose-response models that are developed from different data sets and test protocols.
Active data collection is required, because passive data submission or published data do not usually provide information in sufficient detail to construct dose-response models. Relevant data come preferably from peer-reviewed journals. Given the current lack of data for hazard characterization, it is also advisable to evaluate the availability of unpublished, high-quality data sources. Risk assessors should communicate with experimenters, epidemiologists, food- and water-safety regulators, and others who may have useful data that could contribute to the analysis. One example is the outbreak information collected by the Japanese Ministry of Health, which was used for dose-response modelling of Salmonella (FAO/WHO, 2002a). When such data are used, the criteria and results of evaluation must be carefully documented. If using material published on the Internet, care should be taken to establish the provenance, validity and reliability of the data, and the original source, if known.
Understanding the characteristics of data sources is important to the selection and interpretation of data. Risk assessors often use data for purposes other than those for which they were originally collected, so they need to know how the data were collected and why. The properties of the available data will depend on the perspective of the researchers generating them (e.g. experimenter versus epidemiologist). Knowledge of the source and original purpose of the available data sets is therefore important in the development of dose-response models. The following sections briefly summarize the strengths and limitations of several classes of data sources.
When there is a common-source outbreak of foodborne or waterborne disease of sufficient magnitude, an epidemiological investigation is generally undertaken to identify the cause of the problem, to limit its further spread, and to provide recommendations on how the problem can be prevented in the future. An outbreak of confirmed etiology that affects a clearly defined group can provide particularly complete information about the range of illness that a pathogen can cause, particular behaviour or other host characteristics that may increase or decrease the risk, and - if there is clinical follow-up - the risk of sequelae. When the outbreak is traced to a food or water source that can be quantitatively cultured under circumstances that allow the original dose to be estimated, the actual dose-response can be measured. Even when that is not possible, dose-effect relations can often be observed that show variation in clinical response with changes in relative dose; looking for such relations is part of the classic approach to an outbreak investigation. This may include looking for higher attack rates among persons who consumed more of the implicated vehicle, but may also include variation in symptom prevalence and complications. There are good public health reasons for gathering information on the amount of the implicated food or water consumed. An outbreak that is characterized by a low attack rate in a very large population may be an opportunity to define the host response to very low doses of a pathogen, if the actual level of contamination in the food can be measured. In addition, data from outbreaks are the ultimate "anchor" for dose-response models and are an important way to validate risk assessments.
An outbreak investigation can capture the diversity of host response to a single pathogenic strain. This can include the definition of the full clinical spectrum of illness and infection, if a cohort of exposed individuals can be examined and tested for evidence of infection and illness, independent of whether they were ill enough to seek medical care or diagnose themselves. It also includes definition of subgroups at higher risk, and the behaviour, or other host factors, that may increase or decrease that risk, given a specific exposure. Collecting information on underlying illness or pre-existing treatments is routine in many outbreak investigations.
Obtaining highly specific details of the food source and its preparation in the outbreak setting is often possible, because of the focus on a single food or meal, and may suggest specific correlates of risk that cannot be determined in the routine evaluation of a single case. Often, the observations made in outbreaks suggest further specific applied research to determine the behaviour of the pathogen in that specific matrix, handled in a specific way. For example, after a large outbreak of shigellosis was traced to chopped parsley, it was determined that Shigella sonnei grows abundantly on parsley left at room temperature if the parsley is chopped, but does not multiply if the parsley is intact. Such observations are obviously important to someone modelling the significance of low-level contamination of parsley.
Where samples of the implicated food or water vehicle can be quantitatively assayed for the pathogen, in circumstances that allow estimation of the original dose, an outbreak investigation has been a useful way to determine the actual clinical response to a defined dose in the general population.
Follow-up investigations of a (large) cohort of cases identified in an outbreak may allow identification and quantification of the frequency of sequelae, and the association of sequelae with specific strains or subtypes of a pathogen.
If preparations have been made in advance, the outbreak may offer a setting for the evaluation of methods to diagnose infection, assess exposure or treat the infection.
The primary limitation is that the purpose and focus of outbreak investigations are to identify the source of the infection quickly in order to prevent additional cases, rather than to collect a wide range of information or to precisely quantify the magnitude of the risk. The case definitions and methods of the investigation are chosen for efficiency, often do not include data that would be most useful in a hazard characterization, and may vary widely among different investigations. Key information that would make the data collected in an investigation useful for risk assessment is therefore often missing or incomplete. Estimates of dose or exposure in outbreaks may be inaccurate because:
It was not possible to obtain representative samples of the contaminated food or water.
If samples were obtained, they may have been held or handled after the exposure occurred in such a way as to render the results of testing meaningless.
Laboratories involved in outbreak testing are mainly concerned with presence/absence, and may not be set up to conduct enumeration testing.
It is very difficult to detect and quantify viable organisms in the contaminated food or water (e.g. viable Cryptosporidium oocysts in water).
Estimates of water or food consumption by infected individuals, and of the variability therein, are poor.
There is inadequate knowledge concerning the health status of the exposed population, and the number of individuals who consumed the food but did not become ill (some of whom may have developed asymptomatic infection, whereas others were not infected at all).
The size of the total exposed population is uncertain.
In such instances, use of outbreak data to develop dose-response models generally requires assumptions concerning the missing information. Fairly elaborate exposure models may be necessary to reconstruct exposure under the conditions of the outbreak. If microbiological risk assessors and epidemiologists work together to develop more comprehensive outbreak investigation protocols, this should promote the collection of more pertinent information. This might also help to identify detailed information that was obtained during the outbreak investigation but was not reported.
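Where the missing quantities can at least be bounded, such an exposure reconstruction is often sketched as a simple Monte Carlo calculation. The following is a minimal illustration only; every input (contamination level, serving size and their spreads) is a hypothetical assumption, not data from any actual outbreak:

```python
import random

random.seed(1)  # reproducible illustration

def sample_dose(log10_conc_mean, log10_conc_sd, serving_mean_g, serving_sd_g):
    """Draw one plausible ingested dose (CFU), combining an uncertain
    contamination level (lognormal) with an uncertain serving size."""
    conc = 10 ** random.gauss(log10_conc_mean, log10_conc_sd)         # CFU per gram
    serving_g = max(random.gauss(serving_mean_g, serving_sd_g), 0.0)  # grams
    return conc * serving_g

# Hypothetical inputs: ~10^2 CFU/g measured in leftovers, ~50 g servings
doses = sorted(sample_dose(2.0, 0.5, 50.0, 15.0) for _ in range(10000))
median = doses[len(doses) // 2]
p95 = doses[int(0.95 * len(doses))]
print(f"median dose ~{median:.0f} CFU; 95th percentile ~{p95:.0f} CFU")
```

A fuller reconstruction would also propagate uncertainty about storage time and microbial growth between exposure and sampling, which is precisely the information that outbreak protocols often fail to record.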
Even when all needed information is available, the use of such data may bias the hazard characterization if there are differences in the characteristics of pathogen strains associated with outbreaks versus sporadic cases. The potential for such bias may be evaluated by more detailed microbiological studies on the distribution of growth, survival and virulence characteristics in outbreak and endemic strains.
Attack rates may be overestimated when they are based on signs and symptoms rather than on laboratory-confirmed cases. Conversely, in a case-control study conducted to identify a specific food or water exposure in a general population, the attack rate may be difficult to estimate, and may be underestimated, depending on the thoroughness of case finding.
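As a simple illustration of how both the numerator and the denominator drive the estimate, an attack rate from a cohort investigation can be reported with an approximate confidence interval. The cohort figures below are invented, and the Wilson score interval is just one common choice:

```python
import math

def attack_rate_ci(ill, exposed, z=1.96):
    """Crude attack rate with a Wilson score interval (approximate 95% CI)."""
    p = ill / exposed
    denom = 1 + z ** 2 / exposed
    centre = (p + z ** 2 / (2 * exposed)) / denom
    half = z * math.sqrt(p * (1 - p) / exposed + z ** 2 / (4 * exposed ** 2)) / denom
    return p, centre - half, centre + half

# Hypothetical cohort: 120 exposed persons, 45 meeting the case definition
p, lo, hi = attack_rate_ci(45, 120)
print(f"attack rate {p:.2f} (approx. 95% CI {lo:.2f}-{hi:.2f})")
```

Recomputing the interval under a symptom-based and a laboratory-confirmed case definition gives a quick sense of how sensitive the estimate is to that choice.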
The reported findings depend strongly on the case-definition used. Case definitions may be based on clinical symptoms, on laboratory data or a combination thereof. The most efficient approach could be to choose a clinical case definition, and validate it with a sample of cases that are confirmed by laboratory tests. This may include some non-specific illnesses among the cases. In investigations that are limited to culture-confirmed cases, or cases infected with a specific subtype of the pathogen, investigators may miss many of the milder or non-diagnosed illness occurrences, and thus underestimate the risk. The purpose of the outbreak investigation may lead the investigators to choices that are not necessarily the best for hazard characterization.
Countries and several international organizations compile health statistics for infectious diseases, including those transmitted by food and water. Such data are critical to adequately characterize microbial hazards. In addition, surveillance-based data have been used in conjunction with food survey data to estimate dose-response relations. Analysis of such aggregated data usually requires many assumptions, however, which increases the uncertainty in the results.
Annual health statistics provide one means of both anchoring and validating dose-response models. The effectiveness of dose-response models is typically assessed by combining them with exposure estimates and determining if they approximate the annual disease statistics for the hazard.
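Such an anchoring check can be sketched in a few lines. Every number below is a placeholder assumption (the exponential model, the parameter r, the exposure figures and the surveillance count are all invented for illustration):

```python
import math

# Assumed inputs -- all hypothetical, for illustration only
r = 2e-3                   # per-organism probability of illness (exponential model)
servings_per_year = 5e7    # national servings of the food per year
frac_contaminated = 0.01   # fraction of servings contaminated
mean_dose = 10.0           # mean dose (CFU) in a contaminated serving

p_ill = 1 - math.exp(-r * mean_dose)               # exponential dose-response
predicted = servings_per_year * frac_contaminated * p_ill

surveillance_cases = 9000  # hypothetical reported annual case count
print(f"predicted {predicted:.0f} cases vs {surveillance_cases} reported")
```

Order-of-magnitude agreement, after allowing for under-reporting, is usually the most that can be expected from such a comparison.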
Using annual disease statistics to develop dose-response models has the advantage that it implicitly considers the entire population and the wide variety of factors that can influence the biological response. The illnesses recorded also result from exposure to a variety of different strains. These data allow for relatively rapid initial estimation of the dose-response relationship. The approach is highly cost-effective, since the data are generated and compiled for other purposes. Available databases often have sufficient detail to allow consideration of special subpopulations.
The primary limitation of these data is that they depend heavily on the adequacy and sophistication of the surveillance system used to collect the information. Typically, public health surveillance for foodborne diseases depends on laboratory diagnosis, and thus captures only those who were ill enough to seek care (and able to pay for it), and who provided samples for laboratory analysis. This can bias hazard characterizations toward the health consequences observed in developed nations with extensive disease-surveillance infrastructure. Within developed countries, the bias may be towards diseases of relatively high severity, which lead to medical diagnosis more frequently than do mild, self-limiting diseases. International comparisons are difficult because defined criteria for reporting are lacking at the international level. Another major limitation is that surveillance data seldom include accurate information on the attribution of disease to different food products, on the levels of the disease agent in food, or on the number of individuals exposed. Use of such data to develop dose-response relations also depends on the adequacy of the exposure assessment, the identification of the portions of the population actually consuming the food or water, and the estimate of the segment of the population at increased risk.
The most obvious means of acquiring information on dose-response relations for foodborne and waterborne pathogenic microorganisms is to expose humans to the disease agent under controlled conditions. Feeding studies using volunteers have been carried out for only a limited number of pathogens, mostly in conjunction with vaccine trials.
Using human volunteers is the most direct means of acquiring data that relates an exposure to a microbial hazard with an adverse response in human populations. If planned effectively, such studies can be conducted in conjunction with other clinical trials, such as the testing of vaccines. The results of the trials provide a direct means of observing the effects of the challenge dose on the integrated host defence response. The delivery matrix and the pathogen strain can be varied to evaluate food matrix and pathogen virulence effects.
There are severe ethical and economic limitations associated with the use of human volunteers. These studies are generally conducted only with healthy individuals, primarily between the ages of 18 and 50, and thus do not examine the segments of the human population typically most at risk. Pathogens that are life-threatening, or that cause disease only in high-risk subpopulations, are not amenable to volunteer studies. Typically, the studies investigate a limited number of doses with a limited number of volunteers per dose. The dose ranges are generally high, to ensure a response in a significant portion of the test population; i.e. the doses are generally not in the region of most interest to risk assessors.
The process of (self-)selection of volunteers may induce bias that can affect interpretation of findings. Feeding studies are not a practical means to address strain virulence variation. The choice of strain is therefore a critical variable in such studies. Most feeding studies use only rudimentary immunological testing prior to exposure. More extensive testing could be useful in developing susceptibility biomarkers.
Usually, feeding studies involve only a few strains, which are often laboratory-domesticated or collection strains and may not represent wild-type strains. In addition, the conditions of propagation and preparation immediately before administration are not usually standardized or reported, though these may affect tolerance to acid, heat or drying, as well as altering virulence. For example, passage of Vibrio cholerae through the gastrointestinal tract induces a hyperinfectious state, which persists even after purging into natural aquatic reservoirs; this phenotype is expressed transiently, and is lost after growth in vitro (Merrell et al., 2002). In many trials with enteric organisms, the inoculum is administered orally with a buffering substance, specifically to neutralize the effect of gastric acidity, so the observed dose-response does not translate directly to ingestion in food or water.
In the development of experimental design, the following points need to be considered:
How is dose measured (both units of measurement and the process used to measure a dose)?
How do the units in which a dose is measured compare with the units of measurement for the pathogen in an environmental sample?
Total units measured in a dose may not all be viable units or infectious units.
Volunteers given replicate doses may not all receive the same amount of inoculum.
How is the inoculum administered? Does the protocol involve simultaneous addition of agents that alter gastric acidity or promote the passage of microorganisms through the stomach without exposure to gastric acid?
How do you know you dosed a naive volunteer (serum antibodies may have dropped to undetectable levels or the volunteer may have been previously infected with a similar pathogen that may not be detected by your serological test)?
How is infection defined?
What is the sensitivity and specificity of the assay used to determine infection?
How is illness defined?
When comparing the dose-response of two or more organisms, one must compare similar biological end-points, e.g. infection vs illness.
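Once these design questions are settled, the resulting data reduce to triplets of dose, number exposed and number responding, to which a candidate model can be fitted by maximum likelihood. The sketch below uses invented feeding-trial counts and a crude grid search over the single parameter of the exponential model; it is illustrative only, not a recommended estimation procedure:

```python
import math

# Hypothetical feeding-study data: (dose in CFU, volunteers exposed, infected)
trials = [(1e2, 10, 1), (1e4, 10, 4), (1e6, 10, 9)]

def neg_log_lik(r):
    """Binomial negative log-likelihood of the exponential model P = 1 - exp(-r*d)."""
    nll = 0.0
    for dose, n, k in trials:
        p = 1 - math.exp(-r * dose)
        p = min(max(p, 1e-12), 1 - 1e-12)   # guard against log(0)
        nll -= k * math.log(p) + (n - k) * math.log(1 - p)
    return nll

# Crude grid search over plausible orders of magnitude of r
candidates = [10 ** (x / 10) for x in range(-90, -10)]
r_hat = min(candidates, key=neg_log_lik)
print(f"maximum-likelihood estimate of r: {r_hat:.2e}")
```

A grid search is used here simply to avoid external optimization libraries; in practice a proper optimizer and profile-likelihood confidence bounds would be preferred.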
Biomarkers are measurements of host characteristics that indicate exposure of a population to a hazard or the extent of adverse effect caused by the hazard. They are generally minimally invasive techniques that have been developed to assess the status of the host. The United States National Academy of Sciences has classified biomarkers into three classes, as follows:
Biomarker of exposure - an exogenous substance or its metabolite, or the product of an interaction between a xenobiotic agent and some target molecule or cell, that is measured in a compartment within an organism.
Biomarker of effect - a measurable biochemical, physiological or other alteration within an organism that, depending on magnitude, can be recognized as an established or potential health impairment or disease.
Biomarker of susceptibility - an indicator of an inherent or acquired limitation of an organism's ability to respond to the challenge of exposure to a specific xenobiotic substance.
Even though this classification was developed against the background of risk assessment of toxic chemicals, these principles can be useful in interpreting data on pathogens.
These techniques provide a means of acquiring biologically meaningful data while minimizing some of the limitations associated with human studies. Typically, biomarkers are measures that can be acquired with minimal invasiveness while providing a quantitative measure of a response that has been linked to the disease state. As such, they can increase the number of replicates or doses that can be considered, improve objectivity, and increase the precision and reproducibility of epidemiological or clinical data. Biomarkers may also provide a means of understanding the underlying factors used in hazard characterization. A biomarker response may be observed after exposure to doses that do not necessarily cause illness (or infection). Biomarkers can be used either to identify susceptible populations or to evaluate the differential response in different population subgroups.
It should also be noted that the most useful biomarkers are linked to illness by a defined mechanism, that is, the biological response has a relationship to the disease process or clinical symptom. If a biomarker is known to correlate with illness or exposure, then it may be useful for measuring dose-response relationships even if the subjects do not develop clinical symptoms. Such biomarkers can be used to link animal studies with human studies for the purposes of dose-response modelling. This is potentially useful because animal models may not produce clinical symptoms similar to those in humans, in which case a biomarker may serve as a surrogate end-point in the animal.
Biomarkers are often indicators of infection, illness, severity, duration, etc. As such, there is a need to establish a correlation between the amplitude of the biomarker response and illness conditions. Biomarkers primarily provide information on the host status, unless protocols are specifically designed to assess the effects of different pathogen isolates or matrices.
The only currently available biomarkers for foodborne and waterborne pathogens are serological assays. The main limitation for such assays is that, in general, the humoral immune response to bacterial and parasitic infections is limited, transient and non-specific. For example, efforts to develop an immunological assay for Escherichia coli O157 infections have shown that a distinctive serological response to the O antigen is seen typically in the most severe cases, but is often absent in cases of culture-confirmed diarrhoea without blood. In contrast, serological assays are often quite good for viruses. Other biomarkers, such as counts of subsets of white blood cells or production of gaseous oxides of nitrogen are possible, but have not been tested extensively in human populations.
Intervention studies are human trials where the impact of a hazard is evaluated by reducing exposure for a defined sample of a population. The incidence of disease or the frequency of a related biomarker is then compared to a control population to assess the magnitude of the response differential for the two levels of exposure.
Intervention studies have the advantage of studying an actual population under conditions that are identical to or that closely approach those of the general population. In such a study, the range of host variability is accommodated. These studies are particularly useful in evaluating long-term exposures to levels of pathogens to which the consumer is likely to be subjected. Since intervention studies examine the diminution of an effect in the experimental group, the identified parameters would implicitly include the pathogen, host and food-matrix factors that influence the control groups. Potentially, one could manipulate the degree of exposure (dose) by manipulating the stringency of the intervention.
Since exposure for the control group occurs as a result of normal exposure, the pathogen, host and food-matrix effects are not amenable to manipulation. Great care must be given to setting up appropriate controls, and to actively diagnosing the biological responses of interest in both the test and control populations. Intervention studies often show declines in response that are smaller than those predicted from the initial exposure, usually because of a second, unidentified source of exposure or an overestimation of the efficacy of the intervention. Such findings are, however, often of interest in themselves.
Testable interventions - i.e. those feasible in technical, cultural and social terms - are "conservative" in that they operate within ethical boundaries. They must be implemented within a defined population and, apart from being technically feasible, must be socially acceptable and compatible with the preferences and technical abilities of that population.
Animal studies are used to overcome many of the logistical and ethical limitations that are associated with human-volunteer feeding studies. There are a large variety of different animal models that are used extensively to understand the pathogen, host and matrix factors that affect characteristics of foodborne and waterborne disease, including the establishment of dose-response relations.
The use of surrogate animals to characterize microbial hazards and establish dose-response relations provides a means for eliminating a number of the limitations of human-volunteer studies while still maintaining the use of intact animals to examine disease processes. A number of animal models are relatively inexpensive, thus increasing the potential for testing a variety of strains and increased numbers of replicates and doses. The animals are generally maintained under much more controlled conditions than human subjects. Immunodeficient animal strains and techniques for suppressing the immune system and other host defences are available and provide a means for characterizing the response in special subpopulations. Testing can be conducted directly on animal subpopulations such as neonates, aged or pregnant populations. Different food vehicles can be investigated readily.
The major limitation is that the response in the animal model has to be correlated with that in humans. There is seldom a direct correlation between the response in humans and that in animals. Often, differences between the anatomy and physiology of humans and animal species lead to substantial differences in dose-response relations and the animal's response to disease. For a number of diseases, there is no good animal model. Several highly effective models (e.g. primates or pigs) can be expensive, and may be limited in the number of animals that can be used per dose group. Some animals used as surrogates are highly inbred and consequently lack genetic diversity. Likewise, they are healthy and usually of a specific age and weight range. As such, they generally do not reflect the general population of animals of that species, let alone the human population. Ethical considerations in many countries limit the range of biological end-points that can be studied.
When surrogate pathogens or surrogate animal models are used, the biological basis for the use of the surrogate must be clear.
The use of data obtained with animal models to predict health effects in humans can be aided by appropriate biomarkers.
It is important to use pathogen strains that are identical or closely related to the strain of concern for humans, because, even within the same species and subspecies, different strains of pathogens may have different characteristics that cause variation in their abilities to enter and infect the host and cause illness.
In vitro studies involve the use of cell, tissue or organ cultures and related biological samples to characterize the effect of the pathogen on the host. They are of most use for qualitative investigations of pathogen virulence, but may also be used to evaluate in detail the effects of defined factors on the disease process.
In vitro techniques can readily relate the characteristics of a biological response with specific virulence factors (genetic markers, surface characteristics and growth potential) under controlled conditions. This includes the use of different host cells or tissue cultures to represent different population groups, and manipulation of the environment under which the host cells or tissues are exposed to the pathogen, in order to characterize differences in dose-response relations between general and special populations. In vitro techniques can be used to investigate the relations between matrix effects and the expression of virulence markers. Large numbers of replicates and doses can be studied under highly controlled conditions. These techniques can be used to readily compare multiple species and cell types to validate relationships between humans and surrogate animals. They are particularly useful as a means of providing information concerning the mechanistic basis for dose-response relations.
The primary limitation is the indirect nature of the information concerning dose-response relations. One cannot directly relate the effects observed with isolated cells and tissues to the disease conditions observed in intact humans, such as the effect of integrated host defences. Comparison with humans requires a means of relating the quantitative relations observed in the in vitro system to those observed in the host. These types of studies are usually limited to providing details of factors affecting dose-response relations and to augmenting the hazard characterization, but are unlikely to be a direct means of establishing dose-response models useful for risk assessments. For many organisms, the specific virulence mechanisms and markers involved are unknown, and may vary between strains of the same species.
Expert elicitation is a formal approach to the acquisition and use of expert opinions, in the absence of or to augment available data.
When the specific data needed to develop dose-response relations are lacking, but scientific experts have knowledge and experience pertinent to the required information, expert elicitation provides a means of acquiring and using that knowledge so that consideration of dose-response relations can begin. This can involve developing a distribution for a model parameter for which numerical data are absent, scarce or inconsistent, using accepted processes that document the lines or weight of evidence behind the opinion and its use. It is generally not expensive, particularly in relation to short-term needs.
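One common way of turning an elicited judgement into a usable parameter distribution is a three-point (minimum, most likely, maximum) form such as the modified PERT distribution. The sketch below is illustrative only, and the elicited values are invented:

```python
import random

random.seed(0)  # reproducible illustration

def pert_sample(minimum, mode, maximum, lam=4.0):
    """Draw from a (modified) PERT distribution built from an expert's
    minimum / most-likely / maximum judgement."""
    alpha = 1 + lam * (mode - minimum) / (maximum - minimum)
    beta = 1 + lam * (maximum - mode) / (maximum - minimum)
    return minimum + random.betavariate(alpha, beta) * (maximum - minimum)

# Hypothetical elicited judgement for a model parameter
draws = [pert_sample(0.01, 0.05, 0.30) for _ in range(10000)]
mean_val = sum(draws) / len(draws)
print(f"mean of the elicited distribution ~{mean_val:.3f}")
```

Sampling from the elicited distribution lets the parameter's uncertainty propagate through the risk model alongside the data-derived inputs.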
Results obtained are dependent on the methodology used, and are inherently subjective and thus open to debate. The results are also dependent on the experts selected and may have limited applicability for issues involving an emerging science.
Risk assessors must evaluate both the quality of the available sources of data for the purpose of the analysis, and the means of characterizing the uncertainty of all the data used. Formalized quality control of raw data and its subsequent treatment is desirable, but also highly dependent on availability and the use to which the data are applied. There is no formalized system for evaluation of data for hazard characterization. Few generalizations can be made, but the means by which data are collected and interpreted needs to be transparent. "Good" data are complete, relevant and valid: complete data are objective; relevant data are case-specific; and validation is context specific.
Complete data includes such things as the source of the data and the related study information, such as sample size, species studied and immune status. Characteristics of relevant data include age of data; region or country of origin; purpose of study; species of microorganism involved; sensitivity, specificity and precision of microbiological methods used; and data collection methods. Observations in a database should be "model free" - i.e. reported without interpretation by a particular model - to allow data to be used in ways that the original investigator might not have considered. This may require access to raw data, which may be difficult to achieve in practice. Using the Internet for such purposes should be encouraged, possibly by creating a Web site with data sets associated with published studies.
Valid data is that which agrees with others in terms of comparable methods and test development. In general, human data need less extrapolation and are preferred to animal data, which in turn are preferable to in vitro data. Data on the pathogen of concern are preferred to data on surrogate organisms, which should only be used on the basis of solid biological evidence, such as common virulence factors.
Currently, the recommended practice is to consider all available data as a potential source of information for hazard characterization. Which data can be excluded from the risk assessment depends on the purpose and stage of the assessment. In the early stages, small data sets or those with qualitative values may be useful, whereas the later stages may admit only data that meet high quality standards. Excluding data from the analysis should be based on predefined criteria, not solely on statistical criteria. If the analysis is complicated by extreme heterogeneity or by outliers, it is advisable to stratify the data according to characteristics of the affected population, microbial species, matrix type or any other suitable criterion. This practice should provide increased insight rather than information loss.
Sources of data are either peer-reviewed or non-peer-reviewed literature. Although peer-reviewed data are generally preferable for scientific studies, they also have some important drawbacks as inputs for dose-response modelling. First and foremost, their availability is limited. Important information may also be missing concerning how dose and response data were obtained, as outlined below. Data presentation in the peer-reviewed literature is usually in an aggregated form, which does not provide the level of detail necessary for uncertainty analysis. In older papers, the quality control of the measurement process may be poorly documented. For any of these reasons, the analyst might wish to add information from other sources; in that case, the quality of the data should be explicitly reviewed, preferably by independent experts.
An important aspect of dose information is the performance characteristics of the analytical method. Ideally, a measurement reflects with a high degree of accuracy the true number of pathogens in the inoculum. Accuracy is defined as the absence of systematic error (trueness) and of random error (precision). Trueness of a microbiological method is defined by the recovery of target organisms, the inhibitory power against non-target organisms, and the differential characteristics of the method, as expressed in terms of sensitivity and specificity. Precision is related to the nature of the test (plating vs enrichment), the number of colonies counted or the number of positive subcultures, and the dispersion of the inoculum in the test sample (see Havelaar et al., 1993). It is also important to know the variation in ingested dose between individuals, which arises both from the dispersion of the pathogens in the inoculum and from differences in the quantity of inoculum ingested. These characteristics are of particular relevance when using observational data on naturally occurring infections. A pathogen's infectivity can be affected by both the matrix and the previous history of the pathogen, and this should be taken into account.
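The counting component of precision can often be approximated directly from counting statistics. As a minimal sketch (the colony count, dilution and the Poisson/normal approximation are illustrative assumptions):

```python
import math

def plate_count_ci(colonies, dilution_factor, volume_ml, z=1.96):
    """Approximate 95% CI for a plate count, treating the colony count as
    Poisson and using a normal approximation (adequate above ~20 colonies)."""
    half = z * math.sqrt(colonies)
    scale = dilution_factor / volume_ml   # convert to CFU per ml of original sample
    return (colonies - half) * scale, (colonies + half) * scale

# Hypothetical count: 50 colonies on a 10^-3 dilution plate, 1 ml plated
lo, hi = plate_count_ci(50, 1e3, 1.0)
print(f"estimated 50000 CFU/ml (approx. 95% CI {lo:.0f}-{hi:.0f})")
```

Recovery and inhibition errors (trueness) are not captured by this interval and would need to be characterized separately.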
With regard to response information, it is important to note whether the outcome was represented as a binary or a continuous variable. Current dose-response models (see Chapter 6) are applicable to binary outcomes, which requires that the investigator define the criteria for both positive and negative responses. The criteria used for this differentiation may vary between studies, but should be explicitly taken into account. Another relevant aspect is the characteristics of the exposed population (age, immunocompetence, previous exposure, etc.).
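For a binary outcome, such models give the probability of a positive response at a given dose. One widely used form is the approximate beta-Poisson; the parameter values below are invented for illustration and are not fitted to any real pathogen:

```python
# Approximate beta-Poisson dose-response: P(d) = 1 - (1 + d / beta) ** (-alpha)
alpha, beta = 0.25, 50.0   # assumed illustrative parameters

def p_response(dose):
    """Probability of a positive (binary) response at the given dose."""
    return 1 - (1 + dose / beta) ** (-alpha)

for dose in (1, 10, 100, 1000):
    print(f"dose {dose:>5}: P = {p_response(dose):.3f}")
```

Whatever the chosen model, the probability it predicts is only meaningful relative to the case definition used to score responses as positive or negative.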
The aspects listed in this section are not primarily intended for differentiating "good" from "bad" data for hazard characterization, but rather to guide the subsequent analysis and the use of the dose-response information in a risk assessment model.