

5. RISK CHARACTERIZATION OF SALMONELLA ENTERITIDIS IN EGGS

5.1 Summary

In the risk characterization for S. Enteritidis in eggs, the output of the exposure assessment was combined with the hazard characterization to estimate the probability that an egg serving results in human illness. Changes in predicted risk in response to changes in flock prevalence and in the egg storage time-temperature scenarios are investigated, and key uncertainties that could influence the results are identified. In addition, the effects of risk management options are compared and evaluated quantitatively. It should be noted that the risk assessment of S. Enteritidis in eggs was intentionally conducted so as not to be representative of any specific country or region. The probability of illness and the comparative effects of possible management options therefore reflect only the data used in this assessment.

5.2 Risk estimation for S. Enteritidis in eggs

5.2.1 Model overview

The general structure of the S. Enteritidis in eggs risk assessment is outlined in Figure 5.1. The exposure assessment model consists of three stages (production; shell egg processing and distribution; and preparation and consumption), combined with an egg products processing stage where appropriate. The exposures predicted by the exposure assessment are combined with the dose-response model from the Salmonella hazard characterization to estimate the human illnesses that result, providing the risk characterization. The parameters used for the beta-Poisson dose-response function are described in the hazard characterization (Table 3.16 in Section 3.5.2). One simulation of the entire model consists of 30 000 iterations, which is sufficient to generate reasonably consistent results between simulations.
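To illustrate how these pieces fit together, the following sketch combines a hypothetical exposure distribution with a beta-Poisson dose-response function in a Monte Carlo simulation. It is not the model itself: the contamination frequency, the dose distribution, and the alpha and beta values shown are illustrative placeholders, not the fitted values from Table 3.16.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 30_000                     # iterations per simulation, as in the model

    # Assumed beta-Poisson parameters (placeholders; see Table 3.16 for the fitted values).
    ALPHA, BETA = 0.13, 51.0

    def p_ill(dose):
        """Beta-Poisson probability of illness for a given ingested dose."""
        return 1.0 - (1.0 + dose / BETA) ** (-ALPHA)

    # Hypothetical exposure assessment output: most servings contain no S. Enteritidis;
    # contaminated servings carry a variable number of organisms at consumption.
    p_contaminated = 5e-5                                       # assumed frequency of contaminated servings
    doses = rng.lognormal(mean=np.log(5), sigma=1.5, size=N)    # assumed dose distribution (organisms)

    # Risk per serving = weighted average over contaminated and uncontaminated servings.
    risk_per_serving = p_contaminated * p_ill(doses).mean()
    print(f"Probability of illness per serving: {risk_per_serving:.1e}")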

Figure 5.1. Schematic diagram showing the stages of the risk assessment of Salmonella Enteritidis in eggs

5.2.2 Results

The final output of the shell egg model is the probability that an egg serving results in human illness. This probability is determined as the weighted average of all egg servings (both contaminated and not contaminated) in a population. Clearly, the risk per serving is variable when we consider individual egg servings (e.g. a serving containing 100 organisms is much more likely to result in illness than a serving containing just 1 organism), but the meaningful measure is the population likelihood of illness. This risk per serving can be interpreted as the likelihood of illness given that a person consumes a randomly selected serving.

Three values for flock prevalence (5%, 25% and 50%) were considered. As explained earlier, three scenarios for egg storage time and temperature were also considered (reduced, baseline and elevated). The combination of these uncertain inputs generates nine different outputs from the model.

The lowest risk of illness is predicted when flock prevalence is 5% and storage times and temperatures are reduced (Table 5.1). In this scenario, the calculated risk is 2 illnesses per 10 million servings (0.00002%). The highest risk is predicted when flock prevalence is 50% and storage times and temperatures are elevated. In this case, the calculated risk is 4.5 illnesses per million servings (0.00045%).

Table 5.1. Predicted probabilities of illness per egg serving based on different flock prevalence settings and different egg storage time and temperature scenarios.

                      Time-temperature scenario
Flock prevalence      Reduced        Baseline       Elevated
5%                    0.00002%       0.00002%       0.00004%
25%                   0.00009%       0.00012%       0.00022%
50%                   0.00017%       0.00024%       0.00045%

Changes in risk are approximately proportional to changes in the flock prevalence. For example, 5% flock prevalence is one-fifth of 25%. Correspondingly, the risk of illness for scenarios with 5% flock prevalence is one-fifth that of scenarios with 25% flock prevalence. Similarly, doubling flock prevalence from 25% to 50% also doubles the risk of illness if all other inputs are constant.

Under the baseline conditions and data sets used for this model, for any constant flock prevalence the risk decreases by roughly 25-30% when moving from the baseline to the reduced time-temperature scenario, and increases by almost 90% when moving from the baseline to the elevated time-temperature scenario. Although the magnitude of these changes would differ under other baseline conditions, these simulations show that changes in storage times and temperatures from farm to table can have disproportionately large effects on the risk of illness.

The final output of the egg products model is a distribution of the numbers of S. Enteritidis remaining in 10 000-lb (~4500 litre) containers of liquid whole egg following pasteurization. The S. Enteritidis considered in this output are only those contributed by internally contaminated eggs. This output serves as a proxy for human health risk until the model is extended to consider distribution, storage, preparation - including additional processing - and consumption of egg products. Figure 5.2 shows the output for the 25% flock prevalence, baseline scenario. About 97% of the pasteurized lots are estimated to be S. Enteritidis-free, and the average level is about 200 S. Enteritidis remaining in each lot.

Figure 5.2. Predicted distribution of Salmonella Enteritidis (SE) contributed by internally contaminated eggs remaining in 10 000-lb (~4500 litre) containers of liquid whole egg after pasteurization. This distribution is predicted based on an assumed 25% flock prevalence, and the baseline egg storage times and temperatures in the model. Note that the y-axis is in log10 scale.

5.2.3 Uncertainty

Key uncertainties considered in this analysis relate to within-flock prevalence, frequency of egg contamination from infected hens, frequency of contaminated eggs laid in which the yolk is contaminated, and dose-response parameters.

Within-flock prevalence (FHen_Flock) is a distribution fitted to available data (Table 4.20 and Figure 5.3). Uncertainty regarding the mean of this distribution is estimated by re-sampling from the estimated lognormal distribution with a sample size equivalent to the original data and re-calculating the mean of the simulated data (i.e. bootstrap methods). For simplicity, it was assumed that the standard deviation of this lognormal distribution was constant and equal to 6.96% (Table 4.20). Uncertainty in this curve was calculated by assuming that the uncertainty about the mean was normally distributed. The standard deviation of the mean calculated from 1000 bootstrap replicates was 0.38%. The 5th and 95th confidence bounds are shown in Figure 5.3.
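The bootstrap calculation can be sketched as follows. This is a minimal illustration only: the fitted mean and the size of the original data set are placeholders (the report's values are given in Table 4.20), while the 6.96% standard deviation and the 1000 replicates follow the text above.

    import numpy as np

    rng = np.random.default_rng(2)

    fitted_mean_pct = 2.5     # assumed fitted mean within-flock prevalence (%); placeholder
    fitted_sd_pct = 6.96      # standard deviation held constant, as described in the text (%)
    n_obs = 100               # assumed size of the original data set; placeholder

    # Convert the mean and SD on the natural scale to lognormal mu and sigma.
    sigma2 = np.log(1.0 + (fitted_sd_pct / fitted_mean_pct) ** 2)
    mu = np.log(fitted_mean_pct) - sigma2 / 2.0

    # Parametric bootstrap: re-sample data sets of the original size and re-calculate the mean.
    boot_means = [rng.lognormal(mu, np.sqrt(sigma2), size=n_obs).mean() for _ in range(1000)]

    print(f"Bootstrap standard deviation of the mean: {np.std(boot_means):.2f}%")
    print(f"5th/95th uncertainty bounds: {np.percentile(boot_means, [5, 95])}")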

Frequency of egg contamination from infected hens is assumed constant in the model, but its uncertainty is modelled using a beta distribution with inputs from Humphrey et al. (1989). The frequency of yolk-contaminated eggs is also constant in the model, but its uncertainty is modelled using a beta distribution reflecting the findings of Humphrey et al. (1991). Uncertainty regarding the dose-response parameters is modelled as described in the hazard characterization.

Uncertainty about the probability of illness per serving is shown to increase as the assumed flock prevalence increases (Figure 5.4). For any given flock prevalence, the uncertainty distribution has a constant coefficient of variation (i.e. standard deviation/average). Therefore, as the average probability of illness increases, its uncertainty increases proportionately.

Figure 5.3. Cumulative frequency distributions for within-flock prevalence (FHen_Flock). The curve predicted by available data from infected flocks is shown relative to the best fitting lognormal distribution curve. Upper and lower bound curves are predicted using the 95th and 5th confidence intervals of the mean of the best fitting lognormal distribution.

Figure 5.4. Uncertainty in probability of illness for different flock prevalence inputs assuming the baseline egg storage times and temperatures. Error bars represent the 90% confidence intervals for calculated uncertainty distributions.

Uncertainty not considered in this analysis relates to flock prevalence, predictive microbiology equations, time and temperature of storage, and pathway probabilities. Nevertheless, by changing the input values for flock prevalence, storage time and storage temperature, some evidence is provided regarding the effect of these inputs on risk (i.e. the sensitivity of the predicted risk per serving to these model inputs).

5.2.4 Discussion

The risk of illness predicted by this model ranges from about 2 illnesses per 10 million shell egg servings to 45 illnesses per 10 million servings. The scenarios considered represent a diversity of situations that approximate some countries or regions of the world. Nevertheless, no specific country is intentionally reflected in this model's inputs or outputs.

The effect of different flock prevalence levels on per-serving risk is straightforward to calculate from this model. The impact of changing egg storage times and temperatures, however, is not trivial, and these effects must be simulated to estimate the result. The model shows that a 10% change (either increase or decrease) in storage times and temperatures results in greater than a 10% change in the predicted risk per serving.

The uncertainty in the probability of illness per serving was proportional to the average probability in each scenario considered. This finding suggests that, for new scenarios, uncertainty could be calculated directly from the average risk predicted by the model.

5.3 Risk management options for S. Enteritidis in eggs

5.3.1 Estimation of the risk of illness from S. Enteritidis in eggs in the general population at different prevalence and concentration levels of contamination

The model was used to estimate the relative effects of different prevalence and concentration levels of S. Enteritidis in contaminated eggs. Prevalence can either be the proportion of flocks containing one or more infected hens (i.e. flock prevalence) or the proportion of infected hens within infected flocks (i.e. within-flock prevalence). The risk associated with different flock prevalence levels was illustrated in Table 5.1. That analysis illustrated that risk was generally proportional to the flock prevalence level. Reducing the proportion of infected flocks is therefore associated with a proportional decline in the likelihood of illness per serving among the population of all servings. One can also examine the risk of illness per serving for different within-flock prevalence levels, as well as for different starting concentrations of S. Enteritidis per egg.

To model the effect of within-flock prevalence on risk, the 1st, 50th and 99th percentile values of the within-flock prevalence distribution (0.1%, 0.5% and 22.3%, respectively) were simulated (Figure 5.5). The point of this analysis is to isolate the effect of within-flock prevalence on likelihood of illness by considering within-flock prevalence to be non-variant, but examining three different levels. This analysis also provides insight as to the effect of assuming different average within-flock prevalence levels on probability of illness. For these simulations, flock prevalence was assumed to be 25%. In the baseline time-temperature scenario, risk per serving was 6 × 10⁻⁸, 3 × 10⁻⁷ and 1 × 10⁻⁵ for within-flock prevalence levels of 0.1%, 0.5% and 22.3%, respectively. The results show that risk of illness per serving changes in direct proportion to changes in within-flock prevalence. This effect occurs regardless of the time-temperature scenario considered. Consequently, the risk per serving if all infected flocks had within-flock prevalence levels of 10% (i.e. 10 of every 100 hens are infected) is 100 times the risk per serving when the within-flock prevalence is fixed at 0.1% (i.e. 1 in every 1000 hens is infected). In terms of control, these results suggest that reducing the proportion of infected hens in flocks provides a direct means of reducing illnesses from contaminated eggs.

Different initial levels of S. Enteritidis in eggs at the time of lay were modelled by assuming that all contaminated eggs started with 1, 10 or 100 organisms (Figure 5.6). The baseline egg storage time and temperature scenario was assumed, but flock prevalence was varied. For a flock prevalence of 5%, risk per serving was about 2 per 10 million regardless of whether the initial number of S. Enteritidis per egg was 1, 10 or 100. For flock prevalence levels of 25% and 50%, a more detectable change in risk per serving occurs between eggs initially contaminated with 1, 10 or 100 S. Enteritidis. For example, at 25% flock prevalence, the risk per serving increases from 8 per 10 million to 10 per 10 million as the number of S. Enteritidis in eggs at lay increases from 1 to 100. Nevertheless, for one-log changes in the initial numbers of S. Enteritidis, the resulting change in probability of illness is much less than one log.

Figure 5.5. Predicted probability of illness, assuming that within-flock prevalence is either 0.1%, 0.5% or 22.3% (1st, 50th, or 99th percentiles of the lognormal distribution used in the model, respectively). Three egg storage time and temperature scenarios are considered. Flock prevalence is assumed to be 25%.

Figure 5.6. Predicted probability of illness per serving, assuming that the number of Salmonella Enteritidis (SE) per contaminated egg at lay is 1, 10 or 100. Three flock-prevalence levels are considered. Egg storage times and temperatures are assumed to be the baseline settings.

The dose-response function used in this risk characterization predicts that the probability of illness given an average dose of 1, 10 or 100 organisms is 0.2%, 2.2% or 13%, respectively. If all contaminated eggs were consumed raw immediately after lay, one would expect these probabilities to be appropriate to predict illnesses. The production module predicts that contaminated eggs are produced at a frequency of about 5 × 10⁻⁵ (~1 in 20 000) when flock prevalence is 25%. If all contaminated eggs contained just one organism, with no growth or decline before consumption, the predicted risk per serving should be 5 × 10⁻⁵ × 0.002, or 10⁻⁷. Similarly, the risk per serving if all eggs were contaminated with 10 and 100 organisms would be 10⁻⁶ and ~7 × 10⁻⁶, respectively.
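This arithmetic can be reproduced with a short calculation. The beta-Poisson parameters below are assumed values chosen to give approximately the probabilities quoted above; the actual fitted values are those of Table 3.16.

    # Risk per serving with no growth or cooking: frequency of contaminated eggs
    # multiplied by the dose-response probability of illness at the initial dose.
    ALPHA, BETA = 0.13, 51.0          # assumed beta-Poisson parameters (placeholders)
    P_CONTAMINATED_EGG = 5e-5         # ~1 in 20 000 eggs contaminated at 25% flock prevalence

    def p_ill(dose):
        return 1.0 - (1.0 + dose / BETA) ** (-ALPHA)

    for dose in (1, 10, 100):
        print(f"dose {dose:>3}: P(ill|dose) = {p_ill(dose):.4f}, "
              f"risk per serving = {P_CONTAMINATED_EGG * p_ill(dose):.1e}")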

Figure 5.7 compares these predicted risks - when no growth or cooking is assumed - with the predictions shown in Figure 5.6 for 25% flock prevalence. When contaminated eggs contain just a single S. Enteritidis organism, Figure 5.6 implies that allowing growth inside eggs elevates the risk. Yet when contaminated eggs contain 10 or 100 organisms, Figure 5.6 implies that cooking of egg meals substantially reduces the risk. The explanation for these findings is that, regardless of the initial contamination, the combined effect of growth and cooking is to stabilize the risk per serving at nearly one per million, whereas if growth and cooking are not modelled, the risk per serving depends only on the dose-response function, which increases substantially across the dose range considered. It can therefore be concluded from Figures 5.5 and 5.6 that the model's output is less sensitive to the initial numbers of S. Enteritidis than to other inputs that influence growth and cooking.

Figure 5.7. Predicted risk of illness when the exposure assessment model includes effects of growth and cooking compared with cases when no growth or cooking is modelled, for situations where the initial number of S. Enteritidis in contaminated eggs at lay is 1, 10 or 100. Flock prevalence is assumed to be 25% and baseline egg storage times and temperatures are assumed when growth and cooking are modelled.

5.3.2 Estimation of the change in risk likely to occur from reducing the prevalence of infected flocks and destroying breeding or laying flocks, and estimation of the change in risk likely to occur from reducing the prevalence of S. Enteritidis-positive eggs through testing of flocks and diversion of their eggs to pasteurization, and including the effect of pasteurization

As shown previously, the risk of illness per serving decreases as the percentage of infected flocks (i.e. flock prevalence) decreases. Table 5.2 illustrates the influence of flock prevalence on risk of illness per serving. Because the model includes uncertain inputs, the risk per serving is also uncertain, and the table summarizes that uncertainty as the mean, 5th and 95th percentile values (rounded to one significant figure) of the predicted distribution.

Table 5.2. Predicted uncertainty in risk of illness per egg serving for different flock prevalence levels.

Flock prevalence      Mean             5th percentile    95th percentile
0.01%                 0.00000005%      0.00000002%       0.00000009%
0.10%                 0.0000005%       0.0000002%        0.0000009%
5.00%                 0.00002%         0.00001%          0.00004%
25.00%                0.0001%          0.0001%           0.0002%
50.00%                0.0002%          0.0001%           0.0005%

We can use the results in Table 5.2 to predict the reduction in risk for a country or region that decides to control infected flocks. For example, consider a country with 5% of its flocks containing one or more infected hens. If such a country were to institute a programme with 98% effectiveness in reducing flock prevalence, then successful implementation of the programme would result in a flock prevalence of about 0.1%. The model predicts, in this case, that the mean risk of illness per egg serving would decrease from 200 per thousand million to 5 per thousand million. Pre-harvest interventions, such as those used in Sweden and other countries, might result in flock prevalence levels of 0.1% or lower.

Although the model predicts that probability of illness per serving is proportional to flock prevalence, the question remains: how can we reduce prevalence of infected flocks? To accomplish this seemingly requires either preventing uninfected flocks from becoming infected, or treating infected flocks to render them uninfected.

Treatment of breeding flocks to render them uninfected has been used in The Netherlands (Edel, 1994). Antibiotic treatment of the flock followed by competitive exclusion culture administration might succeed in eliminating the organism from infected hens, but environmental reservoirs may still exist to re-infect hens once the effects of the antibiotic have worn off. Furthermore, application of this method to commercial flocks may not be feasible or economic.

Preventing uninfected flocks from becoming infected is where most attention is focused in control programmes. Uninfected flocks can become infected via vertical transmission (i.e. infected eggs before hatch result in exposure of a cohort via horizontal transmission following hatching), via feed contamination, or via environmental sources (i.e. carryover infection from previously infected flocks). Control programmes may attempt to eliminate these avenues of exposure by applying one or more of the following actions:

1. Test breeding flocks to detect S. Enteritidis infection, followed by destruction of the flock, if infected, to prevent it from infecting commercial flocks consisting of its future offspring.

2. Require heat treatment of feed before its sale (thereby eliminating S. Enteritidis and other pathogens).

3. Following depopulation of an infected flock, intense cleaning and disinfecting of poultry environments known to be contaminated. Such an approach must also eliminate potential reservoirs (e.g. rodents).

Most control programmes use all three interventions to preclude S. Enteritidis-infected flocks. The control programme in Sweden consists of such an approach (Engvall and Anderson, 1999). The Pennsylvania Egg Quality Assurance Program in the United States of America also used such an approach (Schlosser et al., 1999). However, discerning the efficacy of each intervention is difficult. Ideally, one would like to know what percent of newly infected flocks result from vertical transmission, feed contamination or previously contaminated environments.

Giessen, Ament and Notermans (1994) present a model for determining the relative contribution of risk of infection from vertical, feed-borne (or other outside environmental sources) and carryover environmental contamination. Comparing the model with data collected in The Netherlands, it appears that carryover infection was the dominant contributor to infection risk. Such a conclusion is based on the shape of a cumulative frequency curve for flock infection, which suggests that most flocks are infected soon after placement in commercial facilities. There is also evidence that the prevalence of infected breeder flocks is very low in The Netherlands.

Data from the United States of America Salmonella Enteritidis Pilot Project (Schlosser et al., 1999) suggest a fairly constant prevalence of positive samples collected in flocks by age, and that infection did not necessarily increase over time. Nevertheless, these data do not describe the age at which infection was introduced. Roughly 60% of the poultry flocks tested in this project were S. Enteritidis-positive. Additional evidence presented shows that 6 of 79 pullet flocks (8%) tested were S. Enteritidis-positive. These data suggest that the risk of infection from vertical transmission might be about 8%. Furthermore, there is some suspicion that feed contamination is an important source of S. Enteritidis for United States of America poultry flocks.

The data from The Netherlands and the United States of America suggest that the carryover route may account for >80% of the risk of flock infection in countries where S. Enteritidis is endemic. If true, then complete control of breeder flocks might only be expected to achieve a ≤20% reduction in the prevalence of S. Enteritidis-infected flocks in such countries.

Results of an aggressive monitoring programme for breeder flocks in The Netherlands between 1989 and 1992 have been reported (Edel, 1994). For egg-sector breeding flocks, there is some suggestion that the prevalence of infected flocks was reduced by about 50% per year. Effectiveness was less dramatic for meat-sector breeding flocks. This programme involved regular faecal testing of all breeder flocks, as well as regular testing of hatchery samples from day-old chicks. Positive flocks were depopulated until mid-1992, when treatment with enrofloxacin and a competitive exclusion culture was allowed as an alternative to the expense of prematurely depopulating a breeding flock. If a programme with 50% effectiveness in reducing the prevalence of infected flocks each year were implemented for 3 years, one might predict that prevalence would fall to about 12.5% (0.5³) of the prevalence at the start of the programme.

To reduce the risk of carryover infection for commercial flocks, it is thought that aggressive cleaning and disinfection must be completed after an infected flock is depopulated and before another flock is placed to begin a new production cycle. Cleaning and disinfection must also include an effective long-term rodent-control programme. Analysis of efforts in Pennsylvania to reduce the prevalence of infected commercial flocks suggests a decline from 38% to 13% during three years of programme operation (White et al., 1997). This programme routinely screened flocks for evidence of S. Enteritidis and required thorough cleaning, disinfection and rodent control once positive flocks had been depopulated. Another study in Pennsylvania (Schlosser et al., 1999) found 16 of 34 (47%) poultry environments that were initially S. Enteritidis-positive were negative for the pathogen following cleaning and disinfection.

Risk characterization of test and diversion programmes depends on the specific testing used in commercial flocks. For example, the Swedish programme collected three pooled samples, each consisting of 30 faecal droppings, during two or more examinations of egg production flocks during each production cycle (Engvall and Anderson, 1999). In The Netherlands, the breeder-flock monitoring programme testing protocol required the collection of 2 pools of 50 caecal droppings each every 4 to 9 weeks of production (Edel, 1994). The Salmonella Enteritidis Pilot Project’s protocol required collection of swabs from each manure bank and egg belt in a hen house on three occasions in each production cycle (Schlosser et al., 1999).

Regardless of the size or type of sample collected, it would seem that a testing protocol that examines commercial flocks frequently and diverts eggs soon after detection should result in a meaningful reduction in the contaminated shell eggs marketed each year.

To examine the effect of test and diversion with the present model, two protocols were assumed, with either one or three tests administered to the entire population of egg production flocks. The single test would be administered at the beginning of egg production. Under the three-test regime, testing at the beginning of egg production would be followed by a second test four months later, and the third administered just before the flock is depopulated. Each single test consists of 90 faecal samples randomly collected from each flock. A flock is considered positive if one or more samples contained S. Enteritidis.

For the within-flock prevalence distribution used in this model, a single test of 90 faecal samples was likely to detect 44% of infected flocks. This was calculated using a discrete approximation to Equation 5.1, in which a summation over discrete values of p, the within-flock prevalence, replaces the integral. The equation assumes that an infected hen sheds sufficient S. Enteritidis in her faeces to be detected using standard laboratory methods.

P(flock detected) = ∫₀¹ [1 − (1 − p)⁹⁰] f(p) dp        (Equation 5.1)

where p is the within-flock prevalence and f(p) is its probability density function.
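A minimal sketch of the discrete approximation follows. The lognormal parameters used here for the within-flock prevalence distribution are placeholders rather than the fitted values from Table 4.20, so the printed value will only be in the general neighbourhood of the 44% flock sensitivity quoted above.

    import numpy as np

    rng = np.random.default_rng(3)
    N_SAMPLES = 90      # faecal samples per flock test

    # Assumed within-flock prevalence distribution (fraction of hens infected); placeholder values.
    p = np.clip(rng.lognormal(mean=np.log(0.005), sigma=1.8, size=100_000), 0.0, 1.0)

    # P(at least one of the 90 samples is positive | within-flock prevalence p),
    # assuming every infected hen sheds detectable numbers of S. Enteritidis.
    detect = 1.0 - (1.0 - p) ** N_SAMPLES

    # Averaging over the prevalence distribution approximates the integral in Equation 5.1.
    print(f"Flock-level probability of detection: {detect.mean():.2f}")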

If a flock was found positive on a test, its entire egg production was diverted to pasteurization. It was assumed that the egg products industry normally uses 30% of all egg production (consistent with the United States of America industry). Therefore eggs going to breaker plants from flocks other than those mandatorily diverted were adjusted to maintain an overall frequency of 30% (i.e. the percentage of eggs sent to breaker plants from test-negative infected flocks, and non-infected flocks, was reduced proportionally).

Test-positive flocks’ premises were assumed to be cleaned and disinfected following flock depopulation. The effectiveness of cleaning and disinfection in preventing re-infection of the subsequent flock was assumed to be 50%. Furthermore, it was assumed that carryover infection was responsible for flocks becoming infected. Consequently, houses that were not effectively cleaned and disinfected resulted in infected flocks when they were repopulated.

Assuming a starting prevalence of 25% and the baseline egg storage time and temperature scenario, the effectiveness of the two testing protocols was estimated over a four-year period. The probability of illness per shell egg serving in each year was calculated for each protocol (Figure 5.8). Testing three times per year for four years reduced the risk of human illness from shell eggs by more than 90% (i.e. >1 log). Testing once a year for four years reduced risk by over 70%. At the end of the fourth year, the flock prevalences for the one-test and three-test protocols were 7% and 2%, respectively. Therefore, assuming the cost of testing three times per year to be three times the cost of testing once a year (ignoring producer costs or market effects from diversion of eggs), the flock prevalence results suggest a roughly proportional difference between the protocols (i.e. 7%/2% ≈ 3). However, the reduction in risk per serving achieved by the one-test protocol is greater than one-third of that achieved by the three-test protocol: the one-test protocol achieves a 70% reduction, while a testing protocol that is three times more costly achieves a 90% reduction (i.e. a further 20 percentage points). Such a result is not surprising when we consider that the single (or first) test at the beginning of the year affects risk the most. This is because flocks detected on the first test have their eggs diverted for the entire year, while flocks detected on a second test have their eggs diverted for just over half the year. Furthermore, flocks detected on the third test are tested so late in production that diversion of their eggs does not influence the population risk at all.
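The year-on-year decline in flock prevalence can be sketched with simple bookkeeping, assuming (as in the scenarios above) a per-test flock sensitivity of 44%, 50% effective cleaning and disinfection after detected flocks are depopulated, and re-infection of any house that is not effectively cleaned. This ignores the timing of tests within the year and the diversion of eggs, so it will not reproduce Figure 5.8 exactly.

    # Rough prevalence bookkeeping for test-and-divert protocols (illustrative only).
    def prevalence_after_years(p0, tests_per_year, years,
                               sensitivity=0.44, cd_effectiveness=0.50):
        p = p0
        for _ in range(years):
            detected = p * (1.0 - (1.0 - sensitivity) ** tests_per_year)
            # Only houses that are effectively cleaned and disinfected stay clear;
            # the remainder are re-infected through carryover.
            p -= detected * cd_effectiveness
        return p

    for tests in (1, 3):
        print(f"{tests} test(s) per year: prevalence after 4 years = "
              f"{prevalence_after_years(0.25, tests, 4):.1%}")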

While egg diversion from positive flocks reduces the public health risk from shell eggs, it might be expected to increase the risk from egg products, because mandatory diversion sends more contaminated eggs to pasteurization. Nevertheless, in this model the average quality of the contaminated eggs sent to pasteurization is improved by diversion.

It was assumed in the model that all diverted eggs were nest run (i.e. usually stored for less than 2 days). Without mandatory diversion, 97% of lots were S. Enteritidis-free post-pasteurization, and the average number of surviving S. Enteritidis in a 10 000-lb (~4500 litre) bulk tank was 200 (assuming 25% flock prevalence and the baseline egg storage times and temperatures). If a single test is used to determine which flocks are diverted, 97% of vats are still S. Enteritidis-free and they average 140 S. Enteritidis per lot. The decrease in the average number of S. Enteritidis per lot is due to the increased proportion of nest run eggs that are diverted; nest run eggs are stored for a shorter period and consequently contribute fewer organisms. If two tests are used, 97% of vats are again S. Enteritidis-free, and the average is 130 per lot. If three tests are used, there is no additional effect on egg products beyond the second test, because the third test occurs just as the flock is going out of production.

Although not a direct measure of public health risk, these results suggest that the risk from egg products decreases as flocks are detected and diverted. However, this effect is conditional on nest run eggs being substantially less contaminated than restricted or graded eggs. Alternative scenarios might result in some increase in risk from diversion.

Figure 5.8. Predicted probability of illness per serving from shell eggs per year after implementing two testing protocols. It is assumed that all flocks in the region are tested each time and that initial flock prevalence is 25%. Baseline egg storage times and temperatures are used for the four years.

5.3.3 Estimation of the change in risk likely to occur from the use of competitive exclusion or vaccinating flocks against S. Enteritidis

The effects of competitive exclusion (CE) treatment are difficult to quantify from field evidence. Sweden and The Netherlands are examples of countries that include the use of CE in their S. Enteritidis control programmes. Nevertheless, such treatment is only one component of these programmes and its effect is not clearly separable from the other components. CE has been studied in experimental settings for newly hatched chicks. The intent of CE inoculation of chicks is to quickly establish an indigenous intestinal flora that resists S. Enteritidis colonization. Efficacy in preventing infection appears to depend on the CE culture used, the timing of exposure, the dose of exposure, and possibly the addition of lactose (Corrier and Nisbet, 1999). Field evidence of CE efficacy in mature hens comes from the United Kingdom and from The Netherlands. In both countries, antibiotic treatment was applied to flocks known to be infected and the hens were subsequently inoculated with CE cultures. The intent of CE inoculation of hens was to quickly restore the intestinal flora - destroyed by the antibiotic treatment - to assist the hens in resisting future S. Enteritidis exposures. In the United Kingdom, 20 of 22 trials that combined antibiotic and CE treatments succeeded in preventing re-infection of the flock over a 3-month study period (Corrier and Nisbet, 1999); infection status was determined from cloacal swab samples in treated flocks. In The Netherlands, combining antibiotic and CE treatments prevented re-infection in 72% of flocks (n = 32), and two such combined treatments protected 93% of flocks from re-infection.

Vaccination for S. Enteritidis has been examined extensively in experimental settings, but less so in field trials. Experimentally, several types of vaccines have been evaluated: killed bacterins of various strains, live bacterins of attenuated strains, and surface antigen extracts of various strains. Injected killed bacterins are thought to have limited efficacy in preventing intestinal colonization of hens with S. Enteritidis, although such bacterins may, through stimulation of humoral antibody, reduce internal organ (including ovary) infection. Live bacterins - or surface antigen vaccines - may be more effective at modulating intestinal colonization by S. Enteritidis because these products may elicit the cell-mediated immune response needed to resist colonization. Nevertheless, most commercially available vaccines are currently of the killed variety.

Evidence concerning the effectiveness of S. Enteritidis bacterins in controlling infection has been reported for some Pennsylvania flocks (Schlosser et al., 1999). A total of 19 flocks from two farms used a bacterin to control their S. Enteritidis infection and sampling results were compared with 51 flocks that did not use a bacterin. Only a slight difference was noted in environmentally-positive samples collected in vaccinated (12%) and unvaccinated (16%) flocks. Yet, the overall prevalence of S. Enteritidis-positive eggs was 0.37 per 10 000 in vaccinated flocks and 1.5 per 10 000 in unvaccinated flocks. These results support the hypothesis that bacterins may not influence risk of colonization, but may reduce systemic invasion of S. Enteritidis, with resultant egg contamination. Nevertheless, this analysis did not control for confounding factors (e.g. rodent control, adequacy of cleaning and disinfection) that may have influenced the differences between vaccinated and unvaccinated flocks.

To evaluate the effect of vaccination against S. Enteritidis using the present model, it was assumed that flocks would need to be tested to determine their status prior to use of a vaccine. Either a single test or two tests four months apart, each with 90 faecal samples, was assumed. The vaccine was assumed to be capable of reducing the frequency of contaminated eggs by approximately 75% (e.g. 0.37 per 10 000 for vaccinated flocks ÷ 1.5 per 10 000 for non-vaccinated flocks).

Assuming 25% flock prevalence and the baseline egg storage time and temperature scenario, the probability of illness per serving under a single-test-and-vaccination protocol is about 70% of that under a non-vaccination protocol (Figure 5.9). Risk is reduced to 60% of the non-vaccination level if two tests are applied.
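A rough back-of-envelope check of these figures is possible, assuming a per-test flock sensitivity of 44% (as in Section 5.3.2), a 75% reduction in egg contamination for vaccinated flocks, and only a partial-year benefit for flocks first detected on the second test. These assumptions are illustrative; the full model also accounts for egg storage, cooking and consumption.

    # Approximate relative risk (vaccination scenarios vs no vaccination); illustrative only.
    SENSITIVITY = 0.44        # assumed probability that one 90-sample test detects an infected flock
    VACCINE_EFFECT = 0.75     # assumed reduction in contaminated-egg frequency for vaccinated flocks

    one_test = 1.0 - SENSITIVITY * VACCINE_EFFECT
    # Flocks first detected on the second test (4 months into lay) are assumed to be
    # protected for only the remaining two-thirds of the production year.
    two_tests = one_test - (1.0 - SENSITIVITY) * SENSITIVITY * VACCINE_EFFECT * (8.0 / 12.0)

    print(f"One test:  risk relative to no vaccination = {one_test:.0%}")   # text reports about 70%
    print(f"Two tests: risk relative to no vaccination = {two_tests:.0%}")  # text reports about 60%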

Figure 5.9. Comparison of predicted probability of illness per serving between three scenarios: when no vaccination is used; when one test is applied at the beginning of production and positive flocks are all vaccinated; and when a second test is applied four months after the first test and additional test-positive flocks are vaccinated. Flock prevalence is assumed to be 25%, and the baseline egg storage time and temperature scenario is used.

Given the efficacy of bacterin use implied by the field evidence, one can assume that universal vaccination might reduce baseline risk to 25% of the risk resulting from a non-vaccinated population. However, the cost of vaccinating the entire population of laying hens could be high. The scenarios considered here assume that before a flock is vaccinated some testing is done to determine if that flock is infected. Nevertheless, the cost of testing all flocks must be weighed against the cost of vaccination. Also, more field research concerning the true efficacy of vaccination should be conducted before the cost of vaccination is borne by more than a few producers (i.e. if costs are to be paid by the public or shared across the entire industry).

5.3.4 Estimation of the change in risk likely to occur from minimizing the number of S. Enteritidis organisms in eggs through refrigeration of eggs after lay and during distribution, or requiring a specific shelf life for eggs stored at ambient temperatures

Interventions intended to minimize the dose of S. Enteritidis in contaminated eggs focus on preventing any growth of the pathogen after the egg is laid. Most evidence suggests that naturally contaminated eggs contain very few S. Enteritidis organisms at lay. If eggs are consumed soon after lay, or if eggs are kept refrigerated during storage, then the number of S. Enteritidis remains relatively unchanged prior to preparation of egg-containing meals.

Available predictive microbiology models suggest that eggs stored at 10°C will not support growth of S. Enteritidis for an average of 46 days. If most eggs are stored at <10°C and are consumed within 25 days, then interventions intended to improve egg handling will only influence the fraction of eggs that are time- and temperature-abused.

The effect of mandatory retail storage times and temperatures was evaluated using slightly different baseline assumptions (Table 5.3). These hypothetical settings might be typical of a country that does not have egg refrigeration requirements. The effects of time and temperature restrictions were evaluated assuming a flock prevalence of 25%.

Table 5.3. Hypothetical baseline input distributions for egg storage times and temperatures, assuming no egg storage requirements.

Input                                                Distribution
Storage temperature before transportation (°C)       =RiskPert(0,14,35)
Storage time before transportation (hours)           =RiskUniform(0,3)*24
Storage temperature after processing (°C)             =RiskPert(5,14,30)
Storage time after processing (hours)                 =RiskUniform(1,5)*24
Retail storage temperature (°C)                        =RiskPert(0,14,35)
Retail storage time (hours)                            =RiskPert(1,9.5,21)*24

NOTES: The PERT distribution has parameters RiskPert(minimum, most likely, maximum). The uniform distribution has parameters RiskUniform(minimum, maximum).

Figure 5.10. Probability of illness per serving of shell eggs given mandatory shelf lives of <14 or <7 days at retail, or mandatory retail storage temperature <7.7°C. Egg storage times and temperatures are modelled as for the baseline scenario, except for changes noted in Table 5.3. These changes to baseline egg storage times and temperatures were made to represent a country or region that does not routinely refrigerate eggs. Flock prevalence was assumed to be 25%.

Truncating retail storage time to a maximum of either 14 days or 7 days simulated a shelf-life restriction scenario. Truncating the retail storage temperature to less than 7.7°C simulated a refrigeration requirement. The results are summarized in Figure 5.10.
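A minimal sketch of how these retail storage inputs can be sampled and capped is shown below, using the standard beta-form parameterization of the PERT distribution. It reproduces only the Table 5.3 inputs and the intervention scenarios, not the growth, cooking or dose-response stages of the model.

    import numpy as np

    rng = np.random.default_rng(4)
    N = 30_000

    def pert(minimum, most_likely, maximum, size):
        """Sample a PERT(min, most likely, max) distribution via its beta representation."""
        a = 1.0 + 4.0 * (most_likely - minimum) / (maximum - minimum)
        b = 1.0 + 4.0 * (maximum - most_likely) / (maximum - minimum)
        return minimum + (maximum - minimum) * rng.beta(a, b, size)

    retail_temp_C = pert(0, 14, 35, N)              # =RiskPert(0,14,35)
    retail_time_h = pert(1, 9.5, 21, N) * 24        # =RiskPert(1,9.5,21)*24

    # Intervention scenarios of Figure 5.10. Values above each limit are simply capped here;
    # the model may instead truncate and renormalize the underlying distribution.
    shelf_life_14d = np.minimum(retail_time_h, 14 * 24)   # shelf life restricted to <14 days
    shelf_life_7d = np.minimum(retail_time_h, 7 * 24)     # shelf life restricted to <7 days
    refrigerated = np.minimum(retail_temp_C, 7.7)          # retail storage temperature <7.7 degC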

Restricting shelf life to less than 14 days reduced the predicted risk of illness per serving by a negligible amount (~1%). However, keeping retail storage temperature at no more than 7.7°C reduced risk of illness per serving by about 60%. If the shelf life was reduced to 7 days, risk per serving was also reduced by about 60%.

5.4 Discussion

This model was purposely configured and parameterized to not reflect any specific country or region, although its results might be indicative of many country situations. A generic risk assessment such as this one provides a starting point for countries that have not developed their own risk assessment. It can serve to identify the data needed to conduct a country-specific risk assessment, as well as to provoke thinking concerning policy development and analysis.

Control of prevalence - either the proportion of flocks infected or the proportion of infected hens within flocks - has a direct, proportional effect in reducing the probability of illness per serving. Egg storage times and temperatures, on the other hand, can disproportionately influence the risk of illness per serving. The number of organisms initially in eggs at the time of lay seems less important.

Testing flocks, combined with diversion of eggs from positive flocks, is predicted to reduce public health risk substantially. In the scenarios considered here, diversion of eggs from test-positive flocks also reduced the apparent risk from egg products. Vaccination may reduce the risk of illness by up to about 75%, but the reduction achieved in practice is smaller when producers vaccinate only test-positive flocks.

As discussed in the Exposure Assessment for S. Enteritidis in Eggs (Chapter 4), biological inputs may be constant between models for different countries or regions, yet little else is likely to be similar. The predictive microbiological inputs, the distribution of within-flock prevalence, and the frequency at which infected hens lay contaminated eggs are examples of biological inputs that might be constant from one country to another (although not necessarily). The effects of uncertainty regarding these biological inputs to the model have been examined. Nevertheless, there are many aspects of uncertainty not fully considered (e.g. alternative statistical distributions were not evaluated for the predictive microbiology equations or within-flock prevalence distributions). Furthermore, many of the inputs are both highly uncertain and variable between countries. For example, times and temperatures of egg storage may vary considerably within and between countries, but it is difficult for any country to precisely know its distributions for storage times and temperatures.

This model introduces two new concepts not included in previous exposure assessments for S. Enteritidis in eggs. First, it considers the possibility of eggs being laid with S. Enteritidis already inside the yolk. Such eggs defy previous model descriptions of the time and temperature dependence of S. Enteritidis growth in eggs. Although predicted to be uncommon, yolk-contaminated eggs can support rapid growth of S. Enteritidis in much shorter times than eggs contaminated in the albumen.

Second, this model considers the role of S. Enteritidis growth in eggs destined for egg products. While most eggs are modelled as being shipped very quickly to egg products plants (i.e. nest run eggs), some eggs can experience moderate or high levels of growth before being broken and pasteurized.

Many of the results generated by this model are contingent on epidemiological assumptions:

These may be reasonable default assumptions, but more research is needed to determine their appropriateness. Changing these assumptions could generate results that differ from the model, and the model can be adapted to consider such changes.

5.5 References cited in Chapter 5

Corrier, D.E., & Nisbet, D.J. 1999. Competitive exclusion in the control of Salmonella enterica serovar Enteritidis infection in laying poultry. In: A.M. Saeed, R.K. Gast and M.E. Potter (eds). Salmonella enterica serovar Enteritidis in humans and animals: Epidemiology, pathogenesis, and control. Ames, IA: Iowa State University Press.

Edel, W. 1994. Salmonella Enteritidis eradication programme in poultry breeder flocks in The Netherlands. International Journal of Food Microbiology, 21: 171-178.

Engvall, A., & Anderson, Y. 1999. Control of Salmonella enterica serovar Enteritidis in Sweden. In: A.M. Saeed, R.K. Gast and M.E. Potter (eds). Salmonella enterica serovar Enteritidis in humans and animals: Epidemiology, pathogenesis, and control. Ames, IA: Iowa State University Press.

Giessen, A.W. van de, Ament, A.J.H.A., & Notermans, S.H.W. 1994. Intervention strategies for Salmonella Enteritidis in poultry flocks: a basic approach. International Journal of Food Microbiology, 21(1-2): 145-154.

Humphrey, T.J., Baskerville, A., Mawer, S., Rowe, B., & Hopper, S. 1989. Salmonella Enteritidis phage type 4 from the contents of intact eggs: a study involving naturally infected hens. Epidemiology and Infection, 103: 415-423.

Humphrey, T.J. et al. 1991. Numbers of Salmonella Enteritidis in the contents of naturally contaminated hens’ eggs. Epidemiology and Infection, 106: 489-496.

Schlosser, W.D., Henzler, D.J., Mason, J., Kradel, D., Shipman, L., Trock, S., Hurd, S.H., Hogue, A.T., Sischo, W., & Ebel, E.D. 1999. The Salmonella enterica serovar Enteritidis Pilot Project. pp. 353-365. In: A.M. Saeed, R.K. Gast and M.E. Potter (eds). Salmonella enterica serovar Enteritidis in humans and animals: Epidemiology, pathogenesis, and control. Ames, IA: Iowa State University Press.

White, P.L., Schlosser, W., Benson, C.E., Madox, C., & Hogue, A. 1997. Environmental survey by manure drag sampling for Salmonella Enteritidis in chicken layer houses. Journal of Food Protection, 60: 1189-1193.

