

March 1998 SAP Meeting Comments

A Set of Scientific Issues Being Considered by the Agency in Connection with Policy for Review of Monte Carlo Analyses for Dietary and Residential Exposure Scenarios.

The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Scientific Advisory Panel (SAP) has completed its review of the set of scientific issues being considered by the Agency in connection with Policy for Review of Monte Carlo Analyses for Dietary and Residential Exposure Scenarios.

The review was conducted in an open meeting held in Arlington, Virginia, on March 24, 1998. The meeting was chaired by Dr. Ernest E. McConnell (ToxPath, Inc.).

Other Panel Members present were: Dr. Julian Andelman (University of Pittsburgh), Dr. Charles Capen (The Ohio State University), Dr. Janice Chambers (Mississippi State University), Dr. Amira Eldefrawi (University of Maryland), Dr. Dale Hattis (Clark University), Dr. Ernest Hodgson (North Carolina State University), Dr. Bruce Hope (Oregon Department of Environmental Quality), Dr. Ronald Kendall (Texas Tech University), Dr. Michele Medinsky (Chemical Industry Institute of Toxicology), Dr. Charles Menzie (Menzie-Cura and Associates), Dr. Robert Moore (University of Wisconsin), Dr. Herbert Needleman (University of Pittsburgh), Dr. B.K. Nelson (NIOSH), Dr. Christopher Portier (NIEHS), Dr. Howard Rockett (University of Pittsburgh), Dr. Lawrence Sirinek (Ohio Environmental Protection Agency), Dr. Mary Anna Thrall (Colorado State University), and Dr. John Wargo (Yale University).

Public Notice of the meeting was published in the Federal Register on February 11, 1998.

Policy for Review of Monte Carlo Analyses for Dietary and Residential Exposure Scenarios

Oral statements were received from the following:
Dr. Charles Benbrook, Consumers Union.
Ms. Shelley Davis, Farm Worker Justice Fund.
Dr. Robert Sielken, Sielken, Inc.
Dr. Thomas Starr, Environ Corporation.
Dr. Robert Tomerlin and Dr. Leila Barraj, Novigen Sciences Inc. International.
Dr. David Wallinga, Natural Resources Defense Council.

Written statements were received from the following:
California Environmental Protection Agency.
Consumers Union.
Environ Corporation.

General Comments from SAP Members

There were several overall points raised during the Agency presentation, public comment, Panel general discussion, and response to Agency questions, which the Panel wished to add as overarching issues concerning this session. These points are summarized as follows:

  1. The Panel concluded that it is appropriate for the Agency to move toward probabilistic techniques for toxicity endpoints; Agency policy concerning probabilistic methods does not prohibit or exclude the possibility of applying distributions to toxicity data. The Panel recognizes that limited work has been conducted in this area and a generally accepted methodology does not exist. The Panel suggests that the Agency consider revising its policy to encourage use of probabilistic analysis of toxicity endpoints on a case-by-case or substance-by-substance basis.
  2. Whether they are explicitly recognized or not, variability and uncertainty in toxicity estimates are key contributors to variability and uncertainty in the resulting risk estimates. In the face of this large uncertainty, the Panel concluded that the Agency has adopted a risk assessment policy that is intended to be protective of human health. The issue cannot be represented simply as the spread among alternative dose-response models, as has sometimes been suggested. Clearly, more research is required to develop the data necessary for probabilistic analysis of variability and uncertainty.
  3. The Panel differed on whether setting criteria at the 99.9th percentile is a conservative approach. However, even if the 99.9th percentile is utilized, a fraction of the population (e.g., on the order of 23,000 children) would still be exposed at levels of concern for acute effects.

    Setting criteria based on the 99.9th percentile of an exposure distribution is not necessarily conservative for acute effects; it depends on how much of a margin of safety is incorporated into the toxicological portion of the risk evaluation (even whole-day acute exposures can be repeated up to 365 times a year by each of the many people who consume a particular regulated food item carrying the regulated residue). The 99.9th percentile can appear to be very protective, but identifying the actual level of risk requires a combined distributional analysis of the variability in both exposure and human thresholds for the toxic effects under consideration.

    Although setting criteria at the 99.9th percentile might be considered a conservative approach, issues of model robustness, bias, and low precision in even moderate-size samples usually result in inadequate estimates. It may be possible to adopt criteria that yield a conservative approach but avoid the problems usually associated with estimating extreme tails (e.g., a 95th-percentile criterion applied to selected subpopulations). By recognizing and separately modeling subpopulations, it may be possible to choose a lower, less statistically tenuous percentile at which to make regulatory decisions. A lower percentile may also be warranted if the risk assessment contains a number of "conservative" assumptions that might result in overestimates of risk even at the 99.9th percentile.

  4. The method used to address possible residues below the limit of quantification must be clearly delineated in all simulation studies. The sensitivity of the conclusions to different analytical methods of addressing this problem should be evaluated.
  5. The Panel commented that if subpopulations can be identified a priori, they should be modeled separately and not combined with other subpopulations or the overall population. If potential subpopulations are identified in the course of modeling an overall population, they should be segregated from that overall population model at that time.
  6. The Panel discussed the apparent dichotomy between human health and ecological risk assessment. There is a dichotomy, but there are also good reasons for it. In ecological risk assessment, interest usually lies in populations or subpopulations rather than individuals. This allows a probabilistic evaluation of conditions in a way that is not possible for human health risk assessment. Further, toxicity data can be collected on the species of concern or closely related species (including sacrifice of the test organism or impairment of the organism's reproductive ability); such data collection is far more constrained in human hazard assessment. Thus, better information can be obtained on dose-response at levels that are ecologically relevant.
  7. The Panel discussed differences between empirical distribution functions (EDFs) and theoretical probability distribution functions (PDFs). Many of the advantages of the EDF are presented in Appendix C of the Agency's background document, but based on Panel discussion, the Agency should provide guidance on which procedure it prefers. The majority of the Panel indicated a preference for the EDF approach and concluded that it should be tied to the 99.9th percentile of the exposure distribution. However, it is not clear that this point on the distribution is an extrapolation. As an example, assume that 100 samples describing diet in children and 100 samples describing contaminant levels in these foods are provided. If these samples are truly independent, the cross product of all pairs represents a sample of 10,000 points on the combined exposure distribution. This is not the same as 10,000 samples from the exposure distribution, but neither is it the same as 200 samples; the independence assumption gains some improvement over the use of 200 samples, and 10,000 points would be enough to obtain a rough estimate of the 99.9th percentile. Products of many PDFs can be derived analytically and used directly rather than through random sampling on a computer. Enumeration of combinations of EDFs should be used rather than random sampling when the number of combined samples is within the range of normal computing ability (i.e., fewer than 10^7 combinations); a sketch of this enumeration approach follows this list. Both of these techniques avoid the random noise introduced into the estimated final distribution by the use of random samples. Where possible, the Agency should encourage applicants to use analytical statistical methods.

    Mechanistic information about the processes that generate distributions, as embodied in particular parametric forms, should be considered as an alternative to the EDF. EDFs, used strictly, cannot yield unusual values rarer than about one in the size of the data set, and may therefore be inadequately protective for populations of hundreds of millions of people consuming foods upwards of a thousand times per year. EDFs may also make risk analysts reluctant to make needed adjustments to the data (e.g., to subtract out variance attributable to measurement errors), although technically such adjustments can be performed for EDFs in much the same way as for PDFs.

  8. The use of the terms deterministic and stochastic is somewhat misleading in the background documents and the Agency's presentation; even deterministic models have a degree of stochastic variance. The Panel recommends that the Agency alter the definitions to reflect that "deterministic" involves the application of single descriptors of individual random variables to develop an exposure or potency, whereas "stochastic" implies the use of more complex descriptions of each random variable (e.g., entire distributions) which are combined to produce distributions for the exposure and/or potency characterizations.

    In several places, the Agency refers to the use of distributions which have adequate statistical support. "Support" of a statistical distribution is a technical term in statistics with a definition that does not reflect the use implied by the Agency. The Agency should either alter this terminology to reflect the degree of characterization of the distributions involved or provide a definition of "statistical support" when it is first used.

    Definitions for the terms PDF and range need to be corrected in the glossary of the background document. For continuous random variables, the PDF at a point multiplied by the width of a very small interval containing that point approximates the probability that the random variable falls within the interval (the PDF itself cannot express the probability, since it can be greater than one). Range has two meanings and should be defined as an interval, (min x_i, max x_i).
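
As a concrete illustration of the enumeration approach described in item 7, the following minimal Python sketch enumerates all cross products of two hypothetical 100-sample data sets and reads the 99.9th percentile from the resulting empirical distribution. The data, units, and distributional choices are placeholders for illustration, not Agency data.

```python
# Minimal sketch of the enumeration approach from item 7: when two empirical
# distributions are assumed independent and the number of cross combinations
# is small (well under ~1e7), every pair can be enumerated exactly instead of
# randomly sampled, removing Monte Carlo noise from the combined distribution.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical empirical samples (placeholders for real survey/residue data):
# daily consumption of a food item (g/kg body weight/day) for 100 children,
# and measured residue concentrations (mg/kg food) from 100 samples.
consumption = rng.lognormal(mean=0.5, sigma=0.8, size=100)
residue = rng.lognormal(mean=-3.0, sigma=1.0, size=100)

# Full enumeration: the outer product gives all 100 x 100 = 10,000 combined
# exposure values (mg/kg body weight/day) under the independence assumption.
exposure = np.outer(consumption, residue).ravel()

# The 99.9th percentile of the enumerated distribution. With 10,000 points,
# this estimate rests on only the ~10 largest values, so it remains rough, as
# the Panel notes; enumeration at least removes sampling noise on top of that.
p999 = np.quantile(exposure, 0.999)
print(f"enumerated 99.9th percentile exposure: {p999:.4g} mg/kg bw/day")
```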

Questions to the Scientific Advisory Panel

The Agency poses the following questions to the SAP regarding Policy for Review of Monte Carlo Analyses for Dietary and Residential Exposure Scenarios.

1) Issue: Typical versus maximum use patterns

Questions: Does the Panel agree with the Agency's position to allow exposure assessments to include data reflecting typical application parameters? Are the conditions for accepting residue data based on typical parameters appropriate, or should they be modified?

The majority of the Panel agreed with the Agency's position to allow use of "typical" use patterns (subject to the criteria stated), so as to permit more site-specific and reasonable acute risk assessments. Exposure analyses will be improved by using "real" data where they can be supported. Limiting assessments to "maximum" conditions alone would promote the practice of making risk management decisions only on the basis of "worst-case" analyses. It is certainly desirable to allow real observations to be utilized where possible to better represent real distributions of likely exposures. The basic data quality conditions for accepting residue data for consideration appear to be a reasonable starting point for judging whether data are good enough to warrant more sophisticated analysis.

While a number of restrictions have been proposed that could ensure that residue data from reduced pesticide applications reflect more typical application conditions, the option still exists to apply a given pesticide at the maximum rate. Under certain conditions (e.g., emergence of pesticide resistance or unusual pest pressure), the maximum rate may be applied, such that the risk to a consumer population, particularly within a given region of the country, may be significantly underestimated. For acute risk assessment, the Agency needs to determine how often the maximum application rate might be used. If it is used with some frequency, then the maximum application rate should be used for acute assessment, or at least alongside the typical rate to give a range of exposure determinations. If there are conditions that truly overestimate the exposure to the consuming population, then an alteration of the labeling conditions as described at the end of Section 4.5 of the Agency's background document seems warranted.

If use data do not meet a particular quality standard, they are rejected; if they do meet that standard, they are presumably to be used, without further adjustment based on other considerations, as an accurate and unbiased window onto the distributional reality. The most important general threat posed by using a particular set of observations to represent a real variability distribution is that the data may turn out to be in some sense unrepresentative of the real future conditions of use, food preparation, etc. Conformance with label directions is not the only variable likely to affect residue levels or residue level distributions: temperature, humidity, crop varietal types, and other agronomic practices might well change between the time of testing and the times of future application(s).

The Panel's advice is that, where possible, data sets should be stratified by predictor variables and distributions adjusted to represent reasonable alternative views of likely future use conditions. This would allow at least some prudent sensitivity analyses for the effects of these parameters on likely future residue distributions and risks. Acceptable data are a starting point for creative risk analysis designed to develop a clearer picture of the effects of factors that affect the risk, and the likely consequences of alternative policies for control of the risk.

Utilizing the highest observed value for tentative risk calculations is not a substitute for an appropriate estimate of the toxicologically relevant variation, because the highest observation is necessarily limited by the particular samples of the food tested. Where only a handful of samples have been taken (e.g., about five), the highest measurement for a particular pesticide may represent only a modest percentile of the true residue distribution and could lead to a very substantial underestimate of a hazard that could occur many thousands of times per year among individual people in a large population, each consuming a food many times over the course of a year. On the other hand, routine use of the highest residue value, in place of a distribution derived from appropriate data, also runs the risk of substantially overstating the true chronic hazards from pesticide exposures that tend to be averaged out over periods of weeks or months. A quantitative risk analysis procedure must involve combining information about the toxicokinetics, in addition to the toxicodynamics for particular endpoints, with exposure variability information tied to toxicologically defined averaging times.

The Panel discussed consideration of pesticide misuse for risk assessment purposes. While the majority of the Panel considered it unwise to incorporate assumptions about pesticide misuse, one Panel Member suggested that misuse be included only as a separate worst-case scenario, not as part of the main analysis.

2) Issue: Probabilistic assessment of acute but not chronic dietary risk

Question: Does the Panel agree with the Agency's position that currently available consumption data do not permit probabilistic assessment of chronic dietary risk? If the Panel does not agree with the Agency, can the Panel suggest an appropriate process for using the available consumption data to permit probabilistic assessment of chronic dietary risk?

The Panel agreed that consumption data can be used for probabilistic assessment of chronic risk, under specific conditions and guidance, as noted below. In addition, the Panel concluded that the issue of how to develop an appropriate probabilistic assessment for chronic exposure should be explored in more depth by the Agency. The Panel recommends that before the Agency undertake a probabilistic analysis of chronic exposure, consideration be given to whether this will improve risk assessments. The "tiered" approach (i.e., beginning with deterministic and proceeding to more sophisticated probabilistic methods) is consistent with Agency policy. The Panel encourages the Agency to meet with the SAP in the future as it refines its policy for review of Monte Carlo analyses.

The difficulty of utilizing consumption data for probabilistic assessment of chronic risk lies in modeling human food consumption behavior (or any individual human behavior) over the long term. This is a challenging task, but not one that should be avoided. The National Food Consumption Survey and the Continuing Survey of Food Intakes by Individuals, in conjunction with other data sources, may allow the Agency to bring probabilistic modeling to this issue. The existence of statistical central tendency and range data for chronic consumption suggests the possibility of a reasonable first-approximation estimate of chronic consumption patterns. At the very least, the Agency should develop a probabilistic chronic consumption model so that the number, type, and data quality of the variables involved are more apparent and open to discussion. Prohibiting such an activity is likely to stifle necessary research and development efforts. In light of all the discussions involving the extreme tails of distributions, the statement in the background document that "...probabilistic analyses ... would not be expected to significantly alter conclusions based on central tendencies..." appears inappropriate.

The adequacy of using acute exposure data to obtain a "probabilistic assessment of chronic dietary risk" depends on the specific question, the toxicity measure, and the assumptions made. Assumptions about the relative importance of residue level versus frequency of consumption of specific foods, the general form of the dose-response curve, and the effect of seasonality on food consumption may all affect the adequacy of extrapolating from acute exposure to chronic exposure. Given the distribution of exposures in a population at one time point, the distribution of exposures over a longer time period cannot be ascertained with confidence without some repeated measures on the same individuals. This type of information need only be obtained on a subset of the population for which cross-sectional information is available, and the Agency should make an effort to obtain it. Although reasonable assumptions might be made that yield crude answers to some questions, such as the proportion of a population at "high risk" or long-term cancer risk, these should be made cautiously, acknowledging that there is no substitute for actual data.
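
To illustrate the repeated-measures point above, the following minimal Python sketch (with entirely synthetic numbers) shows that two populations with identical cross-sectional one-day exposure distributions can have very different annual-average distributions, depending on how strongly an individual's days are correlated; cross-sectional data alone cannot distinguish the two cases.

```python
# Two populations share the same one-day (cross-sectional) exposure
# distribution, but differ in within-person day-to-day correlation.

import numpy as np

rng = np.random.default_rng(3)
N, DAYS = 10_000, 365

# Case A: each person's days are independent draws, so averaging over a year
# shrinks the spread of annual-average exposure dramatically.
indep = rng.lognormal(0.0, 1.0, size=(N, DAYS)).mean(axis=1)

# Case B: each person repeats a fixed personal habit every day, so the
# annual average keeps the full one-day spread (no shrinkage at all).
habit = rng.lognormal(0.0, 1.0, size=N)   # same one-day distribution as above
persistent = habit                        # annual average equals the habit

print(f"99th pct annual avg, independent days: {np.quantile(indep, 0.99):.3g}")
print(f"99th pct annual avg, persistent habit: {np.quantile(persistent, 0.99):.3g}")
```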

The Panel questioned whether the Agency has evaluated chronic exposures derived from probabilistic sampling of the acute data, and what that data set might look like. Clearly, there would be insufficient data to capture correlations in the long-term intakes of certain foods by some individuals that may arise from multiple biases (e.g., taste preferences, economic status, etc.). The Panel is interested in knowing whether the impact of such biases, evaluated over a longer term among a large population, might be sufficiently diluted to allow meaningful analyses. The best approach would be to develop a new data set based upon longer periods of intake assessment. Absent that possibility in the short term, the methodology proposed by the Agency for evaluating chronic dietary risk would likely result in a reasonable central tendency estimate of residue intake.

A measure of variability in consumption of a particular food collected over one time period will tend to overstate the real variability of consumption over longer time periods and understate it over shorter ones. Three-day consumption variability will thus tend to understate the variability relevant for acute effects resulting from a single meal, while overstating the variability relevant for effects that depend on months or years of accumulated exposure. This issue is presented in a paper recently submitted to Risk Analysis, whose author concluded that variability has dimensions (time, geographic area, gender, age, and other population subgroups) and that these dimensions often have important implications for the use of any particular set of variability observations in a specific real risk assessment/risk management problem. Available data on variability are often not the most directly relevant for risk assessment.

The Agency clearly would benefit greatly from food consumption data systematically collected over a variety of averaging times, from single meals to several months. However, even without such data, the Agency is not without instruments for making some reasonable bounding assessments from the existing data base.

If a probabilistic analysis is deemed useful to decision making, the following processes should be considered:

  1. Use residue data to identify the food items that are most important for analysis. As an example, the Environmental Working Group paper Suggested Probabilistic Risk Assessment Methodology for Evaluating Pesticides with a Common Mechanism of Toxicity: Organophosphate Case Study (as discussed at a session of this meeting of the SAP) illustrates that a relatively small set of food items contribute most of the acute risk and presumably most of the chronic risk. This "screening level" step simplifies the analysis with respect to the number of items for which information on longer-term dietary habits is desired. The Agency should consider other data sets relative to consumption patterns for these limited sets of food products. It is possible to develop an approach for doing this and, if chronic risks are judged to be important to evaluate probabilistically, Agency work groups already addressing these issues could be utilized.
  2. Evaluate the available consumption data currently used by the Agency as two-day and three-day sets per individual (rather than as discrete days) to provide insight into "blocks" of dietary habits. Because data are available for the various age groups, each block of three days provides some insight into patterns over approximately 1% of the year and 10% of a month.
  3. Based on a combination of the above considerations, the Agency should develop a distribution of exposure profiles to represent annual dietary habits for the various age groups.
  4. Consider using techniques such as the micro-exposure modeling approach to represent chronic exposures to individuals who are tracked from age group to age group; a rough sketch of this approach follows this list. The key here is to have the modeled individuals adhere to appropriate growth profiles (based on initial body sizes and other factors) and dietary habits.
  5. If the above appears too difficult to undertake, consider at least exploring the two- and three-day block data in a manner that would yield age-specific estimates of dose on an annual basis.
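
The following is a minimal Python sketch of the micro-exposure style of simulation described in items 2-5: simulated individuals are assigned a year of intake built from three-day consumption "blocks" (preserving short-run dietary habits), combined with residue draws, and averaged into an annual dose. All distributions, the block pool, body weights, and residue parameters are illustrative assumptions, not Agency data or an established implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

N_PERSONS = 5_000
BLOCKS_PER_YEAR = 365 // 3          # ~1% of the year per block, as noted above

# Placeholder pool of observed 3-day consumption blocks (g/day for 3 days),
# standing in for CSFII-style survey records for a single age group.
block_pool = rng.lognormal(mean=3.0, sigma=0.7, size=(400, 3))

annual_dose = np.empty(N_PERSONS)
for i in range(N_PERSONS):
    body_weight = rng.normal(20.0, 3.0)   # child body weight, kg (illustrative)
    # Each simulated person re-samples whole 3-day blocks, so within-block
    # day-to-day correlation in eating habits is retained.
    blocks = block_pool[rng.integers(0, len(block_pool), BLOCKS_PER_YEAR)]
    daily_intake = blocks.ravel()         # g/day across the simulated year
    residues = rng.lognormal(-4.0, 1.2, daily_intake.size)  # mg residue per g food
    annual_dose[i] = (daily_intake * residues).mean() / body_weight

print(f"median annual-average dose: {np.median(annual_dose):.3g} mg/kg bw/day")
print(f"99th percentile dose:       {np.quantile(annual_dose, 0.99):.3g} mg/kg bw/day")
```

A fuller implementation along the lines of item 4 would also move individuals between age groups, updating body weight and the block pool as they age.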

3) Issue: Composite samples and acute exposure estimates

Question: Does the Panel agree with the Agency's position that it is inappropriate to use monitoring data to assess acute dietary exposure? If the Panel does not agree with the Agency, can the Panel recommend a process to utilize composite samples from FDA, USDA, and State pesticide monitoring data to assess acute dietary exposure?

As presented in the background document, the approach articulated by the Agency seems appropriate and protective. The Panel agrees with the Agency that monitoring data derived from composite samples seem inappropriate for direct, unchanged use in assessing acute dietary exposures, for the reasons described.

It is clear that if the Agency is protecting against single-day exposures, it would be inappropriate to utilize composite samples for evaluating acute risks. The appropriate dose-response metric to be used in such cases is that related to a one-day exposure. Alternatively, if toxicological effects are known to be manifested within a day and residence in the body is less than 24 hours, then it makes sense to evaluate acute exposures against acute dose-response metrics. However, if the dose-response metric that is being used is based on an extended exposure (many days or months), then the one-day exposure event may not be the relevant exposure metric (although it is more likely to show risks and therefore be more protective). In such cases, a short-term (a few days or weeks) average exposure may be more relevant. If exposures on the order of days to weeks are more relevant for estimating exposure than are single days or meals, then composite data (including monitoring data) are appropriate for use.
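
As a small illustration of the averaging-time point above, the following Python sketch compares upper percentiles of single-day exposures with those of multi-day averages from the same series; the exposure series is synthetic and the 7-day window is an arbitrary choice, so only the direction of the comparison matters.

```python
import numpy as np

rng = np.random.default_rng(2)

# One simulated person-year of daily exposures (mg/kg bw/day, illustrative).
daily = rng.lognormal(mean=-2.0, sigma=1.0, size=365)

# 7-day running averages: the exposure metric relevant when the dose-response
# metric reflects days-to-weeks of accumulated exposure.
weekly = np.convolve(daily, np.ones(7) / 7, mode="valid")

# The upper tail shrinks markedly under averaging, which is why single-day
# metrics are the more protective choice when toxicity is truly acute.
print(f"99th pct, single-day exposure: {np.quantile(daily, 0.99):.3g}")
print(f"99th pct, 7-day average:       {np.quantile(weekly, 0.99):.3g}")
```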

It would be incorrect to use these data from composite samples without adjustment. However, single-serving samples, whose residue variability is sometimes much larger than that of composite monitoring samples, could possibly be analyzed to give at least some quantitative clues to the amount of change in lognormal variance that would be needed to make this translation for specific foods. The residue concentrations themselves are likely to be susceptible to similar treatments (after allowance for the truncation of the distribution by "non-detect" values). If single-serving residue distribution data exist for some foods, these data could be analyzed in the same way as the residue distributions for composites, and a body of experience could then be assembled on the likely relationship between the lognormal variances of residue level distributions for various sizes of composite samples versus single-serving samples of the foods of interest. This is perhaps somewhat adventurous and subject to some uncertainty, but it would make possible at least an initial risk analysis that could be refined with better data if circumstances seemed to warrant it.
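
One possible form such a variance adjustment could take is sketched below in Python, assuming, purely for illustration, that a composite sample behaves as the arithmetic mean of n independent, lognormally distributed single servings; the composite size and fitted parameters are hypothetical, and real data would require handling non-detects first, as noted above.

```python
import numpy as np

def single_serving_params(mu_c: float, sigma_c: float, n: int):
    """Given lognormal parameters fitted to composites of n servings, return
    approximate single-serving lognormal parameters (mu, sigma) by moment
    matching, under the mean-of-n-independent-servings assumption."""
    # Averaging n independent servings divides the squared coefficient of
    # variation by n, so CV_single^2 = n * CV_composite^2.
    sigma2 = np.log(1.0 + n * (np.exp(sigma_c**2) - 1.0))
    # Match arithmetic means: mu + sigma^2/2 = mu_c + sigma_c^2/2.
    mu = mu_c + 0.5 * sigma_c**2 - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

# Example: hypothetical lognormal fit to composite monitoring data, with
# n = 10 servings assumed per composite.
mu_c, sigma_c = -3.0, 0.4
mu_s, sigma_s = single_serving_params(mu_c, sigma_c, n=10)
print(f"single-serving fit: mu={mu_s:.3f}, sigma={sigma_s:.3f}")

# The inferred single-serving distribution has a much heavier upper tail,
# which is what matters for acute (single-meal) exposure estimates.
print(f"composite 99.9th pct:      {np.exp(mu_c + 3.09 * sigma_c):.4g}")
print(f"single-serving 99.9th pct: {np.exp(mu_s + 3.09 * sigma_s):.4g}")
```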

Assuming that the characterization of field trial data as composite measurements of samples obtained "at the farm gate" means that they are derived from multiple samples exposed to pesticides under identical (or very similar) conditions, any dilutional effects from untreated or undertreated samples would be minimized. It should be noted, however, that in some instances this may not be true, since some treatment methodologies can produce wide variability in the application of pesticides. Thus, some caution should be applied to these data, and where appropriate, non-composite samples might be preferable for evaluating residue levels in single-serving products. In some cases, however, the use of these data as a source of residue concentrations for subsequent analysis of potential risk, as described, appears to be sufficiently conservative.

The Panel also discussed the distribution of field trial data. If sufficient data are available to determine an average or 95th percentile field trial value, are there not enough to suggest a credible range for such data? Could such a credible range serve as the basis for a "distribution" of such values? There are at least two schools of thought operating here: one selects no distributions (and hence recognizes no uncertainty and/or variability) until the data have achieved some state of "perfection"; another (which the Panel prefers) recognizes uncertainty and/or variability as early in the analysis as possible. The latter approach leads to an earlier understanding of what is known about the system being modeled and better supports sensitivity analyses. Most importantly, it prevents results (model outputs) from being mistaken for "absolute" or "highly certain" values.

4) Issue: Market share adjustments for acute dietary but not for short- and intermediate-term exposure assessments

Questions: Does the Panel agree with the Agency's position that it is appropriate to assess acute dietary risk on a population basis and to assess short- and intermediate-term occupational and residential exposure on an exposed-individual basis? If the Panel considers it more appropriate to assess short- and intermediate-term occupational and residential risk on a population basis, can the Panel recommend a process to do so?

The Panel acknowledged that the September 1995 SAP endorsed the Agency's acute dietary assessment approach, which focuses on population risk rather than risk to the most highly exposed individual, and the current Panel was therefore reluctant to disagree with that decision. However, there may be significant impacts on highly exposed subpopulations that, while statistically insignificant for the overall population, may be highly undesirable for these high-risk groups. It is therefore important that sufficient guidance be provided so that effects on subpopulations can be fully assessed and documented.

Because of the potential for significant differences in occupational exposures, the Panel agrees with the Agency's suggestion that risks from occupational exposures be assessed separately for individual exposed subpopulations. In particular, the lengths of working days and the age structure of the working population (e.g., including children) may differ from those commonly assumed for occupational exposures. On the other hand, such delineation for residential exposures does not appear to be as clearly warranted; the analysis of residential exposures may be more appropriately conducted on a population basis.

In its deterministic assessments of post-application exposures, the Agency should recognize that "farm workers" may work more than 8 hours/day and have body weights less than 60-70 kg. Again, the focus is on a more realistic assessment of who is actually exposed.

FOR THE CHAIRPERSON:
Certified as an accurate report of findings:

Paul I. Lewis
Designated Federal Official
FIFRA/Scientific Advisory Panel
DATE:_____________________

Scientific Advisory Panel (SAP) March 1998 Meeting Final Report

