Modeling has
a bad reputation. Indeed, the conventional wisdom is that monitoring is the Gold Standard for exposure assessment. That is, the
estimation of human exposure via the modeling of breathing zone concentrations
has generally been considered to be inferior to the direct measurement of the
breathing zone air. On the face of it, this attitude and conclusion seem to make perfect sense. After all, models are simply more or less
general constructs of reality, while measurement actually samples the reality
of interest. What could be clearer?
Well, it turns out to be not so clear. The
practical reality (spell that C-O-S-T) of monitoring means that not many
samples will be taken to estimate the exposure. Thus, when one does a rational statistical
analysis of what one gets from typical monitoring data versus what one gets from modeling, modeling comes out on top.
All of this has been laid out in strong mathematical and logical
detail in an analysis performed by Dr. Mark Nicas. It was presented in a 2002 paper (“Uncertainty in Exposure Estimates Made by Modeling Versus Monitoring,” AIHA Journal 63:275–283).
I will be happy to send a PDF copy of this paper to anyone who asks me for it at mjayjock@gmail.com. Mark is a brilliant statistician and
modeler. Indeed, he has developed and
tirelessly promoted the 2-zone model of indoor airborne concentration while
providing a very strong technical rationale for the relative strength and
superiority of modeling versus standard monitoring data for typical exposure
scenarios. I was the second author
on the paper but Mark performed all the “heavy lifting” relative to the
development of the mathematical rationale.
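For readers who have not seen it, the heart of the two-zone model is simple: a small “near field” around the source exchanges air with the rest of the room (the “far field”) at a limited interzonal flow rate, so the person working at the source breathes more than the well-mixed room average. Below is a minimal sketch of the steady-state relationships in Python; the emission rate, ventilation rate, and interzonal flow values are illustrative assumptions on my part, not numbers from the paper.

```python
# Minimal sketch of the steady-state two-zone (near-field/far-field) model.
# All parameter values below are illustrative assumptions, not data.

G = 100.0    # contaminant emission rate, mg/min (assumed)
Q = 20.0     # general room ventilation rate, m^3/min (assumed)
beta = 5.0   # interzonal airflow between near and far field, m^3/min (assumed)

# At steady state the far field looks like a single well-mixed room:
C_far = G / Q                # far-field concentration, mg/m^3

# The near field adds an increment set by the limited airflow (beta)
# carrying contaminant out of the zone around the source:
C_near = G / Q + G / beta    # near-field concentration, mg/m^3

print(f"Far-field steady state:  {C_far:.1f} mg/m^3")   # 5.0 mg/m^3
print(f"Near-field steady state: {C_near:.1f} mg/m^3")  # 25.0 mg/m^3
```

The G/beta term is the whole story of the model’s value: with these assumed numbers, the worker at the source sees five times the concentration that a one-box, well-mixed model would predict for the room.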
I am, of course, biased but I think the paper should have gotten a lot
more interest than it did; however, given the complexity of the analysis I can
perhaps understand why it did not get more attention. What is nice about this blog is that I can
dust off this work and present it in a summary form to a new audience and in a
new light.
Mark showed, very logically, that for a sample size of three or fewer workdays, mathematical modeling rather than air monitoring should provide a more accurate estimate of the long-term mean inhalation exposure level if the anticipated geometric standard deviation (GSD) of the distribution of airborne concentrations exceeds 2.3. For a single sample (n = 1), this was true for GSD > 1.7. Paul Hewitt, in his publication “Interpretation and Use of Occupational Exposure Limits for Chronic Disease Agents” (Occupational Medicine: State of the Art Reviews, 11(3), July–September 1996),
tells us that “The range of GSDs - 1.5 to 3.0 - covers the range of most
‘within-worker’ GSDs commonly observed in practice.” My experience has been that the majority of
these are on the upper end of this range.
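To see why a high GSD undermines small-sample monitoring, here is a rough Monte Carlo illustration (my own sketch, not the uncertainty framework from the 2002 paper): draw n full-shift samples from a lognormal exposure distribution and ask how far their average can stray from the true long-term mean. The geometric mean and campaign sizes are assumed values for illustration only.

```python
# Rough illustration of monitoring uncertainty with few samples.
# A sketch under assumed values, not the analysis from the 2002 paper.
import numpy as np

rng = np.random.default_rng(0)
gm = 1.0  # geometric mean of daily exposures, arbitrary units (assumed)

for gsd in (1.7, 2.3, 3.0):
    sigma = np.log(gsd)
    true_mean = gm * np.exp(0.5 * sigma**2)  # lognormal arithmetic mean
    for n in (1, 3):
        # 10,000 simulated monitoring campaigns of n full-shift samples each
        campaigns = rng.lognormal(np.log(gm), sigma, size=(10_000, n))
        ratio = campaigns.mean(axis=1) / true_mean
        lo, hi = np.percentile(ratio, [5, 95])
        print(f"GSD={gsd}, n={n}: 90% of campaign means fall between "
              f"{lo:.2f}x and {hi:.2f}x the true mean")
```

With one to three samples and a GSD in the range Hewitt describes, the monitored average can easily miss the true mean severalfold in either direction; that is precisely the regime in which Mark’s analysis shows a reasonable model to be the more accurate estimator.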
Certainly modeling will do a better job than monitoring when, as is typical in most scenarios, the number of monitoring samples taken is n = 0, a point effectively and convincingly presented by John Mulhausen a number of years ago.
The concluding paragraphs from the 2002 paper are reproduced below:
A framework has been described for comparing uncertainty in estimates of the long-term mean exposure level made by modeling versus monitoring. Although not developed here, a related approach can be used to compare estimates of other exposure parameters such as the 95th percentile of the C distribution. The NF exposure model was used to illustrate the framework, but the authors recognize that different models are more realistic for other scenarios, and that the time-activity pattern of the exposed employee must always be considered. Central to the utility of the modeling approach is that the model be an appropriate physical descriptor of the contaminant emission rate function, the pattern of contaminant dispersion in room air, and the manner of removal from room air. Because the traditional focus of industrial hygiene has been monitoring rather than modeling, the available mechanistic (physical-chemical) indoor air exposure models have not been systematically investigated and validated. In turn, this lack of research leaves much present-day uncertainty regarding source emission rates, dispersion patterns in air, and sink effects.
Based on preliminary analysis, the authors argue that directing research funds to developing and validating mechanistic exposure models will ultimately provide cost-effective exposure assessment tools. Their availability would encourage more assessments of compliance with OSHA permissible exposure limits in small- and medium-sized workplaces that lack the dedicated services of an industrial hygienist. Further, validated models would have benefits beyond OSHA-related compliance determinations. Employers could proactively use such models to devise appropriate exposure controls in planning new processes and operations. Consumer product and environmental regulatory agencies could use models to assess the safety of products that release airborne toxicants. Mathematical models could be applied to epidemiological studies for retrospective exposure estimation when past monitoring data are poor in quality or nonexistent, as is often the case. At a minimum, validated models could be used to rank average exposure levels by task/job, and if sufficient information were available, could provide quantitative estimates for exploring dose-response relationships. (emphasis added)