Sunday, April 26, 2015

Focusing on Residential Air Monitoring

I am often asked to review data on environmental exposure to a specific chemical as part of a human health risk assessment of the general population's exposure to that chemical.   This usually means reviewing the available scientific literature for the presence of the chemical in various media, including ambient and indoor air, water, environmental surfaces, soil, and indoor dust.   The basic idea is to determine where the chemical might be showing up and whether the concentrations being measured might present a risk to people.  The premise is fairly simple.  It is assumed that people:
  • Breathe air
  • Touch surfaces with various body parts
  • Have hand-to-mouth activity that results in ingested dirt, dust and their contaminants

The literature search includes where and in what products the chemical is used, which provides some insight into where and how exposures might occur.

The heart of the search is typically monitoring studies done on the above-mentioned media in homes and other occupied spaces.   Since we spend most of our time indoors, that part of the process makes sense to me.   We are then presented with data on the measured concentrations in these media.  The data are usually highly variable, with a large range of concentrations, and here is where the difficulty begins for me. 

If the samples were taken at random, then how do I determine whether enough were taken to present a reasonably complete picture?   Imagine, if you will, a situation in which only 1% of the US population is being overexposed in their homes to Compound X.    That represents over 3 million people.   The binomial theorem tells us how big a sample one would need to take to see a 1% response.   If any of my readers would like me to go through the math I can do so in a future blog.  It is a valuable mathematical tool and I would be happy to do so.   If, however, you are willing to trust that I did it properly, I can tell you that taking 100 samples (i.e., sampling 100 homes) at random has a 37% chance (that is, greater than a 1 in 3 probability) of NOT seeing the 1 house in 100 that is being overexposed to the chemical.  
So how many houses have to be sampled at random to find the 1 in 100 with a 95% confidence level of seeing it, or only a 5% chance of NOT seeing it?   The answer is a sample of about 300 houses.   These are not whimsical predictions; the binomial theorem is as real as it gets.  It is why insurance companies do not lose money on their policies and casinos do not lose on their games of chance.
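
For those who want to check the arithmetic, the short sketch below reproduces the two figures quoted above (the roughly 37% chance of missing the overexposed house with 100 samples, and the roughly 300-house requirement for 95% confidence) directly from the binomial logic; the 1% prevalence and 95% confidence values are the ones from the discussion above.

```python
import math

# Probability that one randomly sampled house is NOT the overexposed
# one, given a 1% prevalence of overexposure.
p_miss_one = 0.99

# Chance of missing every overexposed house in 100 random samples.
p_miss_100 = p_miss_one ** 100
print(f"P(missing it with n=100): {p_miss_100:.3f}")  # about 0.366, i.e., ~37%

# Smallest n such that the chance of missing drops to 5% or less:
# 0.99^n <= 0.05  ->  n >= ln(0.05) / ln(0.99)
n_needed = math.ceil(math.log(0.05) / math.log(0.99))
print(f"Houses needed for 95% confidence: {n_needed}")  # 299, i.e., ~300
```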

I know of few monitoring studies that have gone into more than a few score of homes, and none that have sampled 300.

What is the lesson in all this?

There are two big lessons in this for me.  One, never discount a HIGH value unless you have real evidence that it truly is an artifactual outlier.   

The second lesson is that we need to do a better job of understanding the potential sources of the chemical, then forgo at least some of the random sampling and instead go directly to the houses containing these sources and sample them. 

If the concern is acute exposure then the sampling needs to happen proximate to the initiation of a fresh source with potentially waning strength.  

If the concern is the migration of a semi-volatile organic compound (SVOC) out of a plastic or foam matrix and into dust, then be sure to give the source enough time to reach equilibrium before sampling, perhaps as long as a few years.

All of the above monitoring could be helped and informed by laboratory experiments and modeling, but you have heard that song from me before.

I would love to hear your comments on these issues and any experience you might have with them. 

Sunday, April 19, 2015

Models and Monitoring are NOT Enemy Camps

A recent blog here asserting that modeling can be more accurate than monitoring may have, as a result of its title, unfortunately enhanced the old notion that modeling and monitoring are at odds with one another.   The blog was written because many consider that monitoring is the “gold standard” and that modeling will never be accepted as a reasonable substitute for this “proper” characterization of exposure. The truth is that modeling alone, absent field or experimental work to monitor exposure scenarios and to implement, evaluate, and refine the models, is a relatively anemic activity.

It is true that one can use “first principles” related to known physical properties of the materials, along with accounting procedures that keep track of how much substance might be going into and out of any volume of air, but these are all dependent on what I call sub-models.  We need to understand such critical "monitored" realities as:

  •   How the air is moving: its velocity and volumetric flow rate
  •   The characteristics of the emitting source:
    •  How big is it?
    •  Is it a point or an area source?
    •  What is the rate of emission as a function of time?
    •  Are there competing sources within the scenario?

All of these require at least some level of experimentation and data gathering (i.e., monitoring) to properly implement the model.  After this phase, the model output needs to be evaluated with the MONITORING of the exposure potential.  If the model got it essentially right, the monitoring will show this.  If not, then the model builders should gain some insight from the monitoring results as to how to improve the model.   It is clearly an iterative process in which the monitoring continually shows the model builders where the model needs improving.

Once the model is developed, however, it should really help to inform the monitoring practitioner as to where he or she needs to monitor and, more important, where they do NOT need to monitor. The typical Industrial Hygienist (IH) in an industrial facility is often faced with perhaps hundreds or at least scores of “monitoring opportunities”.   These are scenarios that might result in significant exposure to workers.    Given the practical limitations of available resources, he or she will simply not be able to monitor everything everywhere.  Normally, the IH in this situation applies “expert judgment” to eliminate and exempt the majority of scenarios from undergoing monitoring.   Indeed, John Mulhausen has made the critical point that the typical number of exposure samples taken relative to exposure scenarios is ZERO.
So how does an IH, who only does monitoring, decide where to monitor?   Well, some scenarios are obvious when at least one of the following factors is present:

  • The workers are showing symptoms of overexposure
  • The chemical is highly toxic (low OEL)
  • The process:
    •  Is fast (producing relatively high levels of product and airborne contaminant)
    •  Involves a considerable amount of volatile or dusty material
    •  Is relatively open or “leaky”
    •  Occurs at elevated temperature
Whether they realize it or not, I believe that many, if not most, IH practitioners in this situation are applying their own personal "experience model" to estimate whether the ratio of potential exposure to the exposure limit for the chemical is significant. If this subliminal model tells them that the ratio could be close to or greater than one, then they typically move forward to monitor the scenario. 

What my colleagues and I have been asking for quite a few years now is: Why run a subliminal model when they can use explicit mathematical models with all of their advantages to inform these decisions? 
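
To make this concrete, here is a minimal sketch of what an explicit screening model might look like: a steady-state well-mixed room model (C = G/Q) whose predicted concentration is compared against the OEL. The emission rate, ventilation rate, OEL, and action ratio below are all hypothetical values, chosen only to illustrate the mechanics rather than to represent any particular workplace.

```python
def screen_scenario(G_mg_min: float, Q_m3_min: float, oel_mg_m3: float,
                    action_ratio: float = 0.5) -> bool:
    """Return True if the scenario warrants monitoring.

    Uses the steady-state well-mixed room model, C = G / Q, where
    G is the contaminant emission rate (mg/min) and Q is the room
    ventilation rate (m^3/min).  The action_ratio is the fraction of
    the OEL above which we flag the scenario for monitoring.
    """
    c_est = G_mg_min / Q_m3_min        # predicted concentration, mg/m^3
    hazard_ratio = c_est / oel_mg_m3   # predicted exposure vs. the OEL
    return hazard_ratio >= action_ratio

# Hypothetical example: a 50 mg/min source in a room with 20 m^3/min of
# ventilation and an OEL of 10 mg/m^3 gives C = 2.5 mg/m^3, a ratio of
# 0.25, so the scenario is not flagged for monitoring.
print(screen_scenario(50, 20, 10))
```

The point is not the particular numbers but that the judgment is written down: the same inputs always yield the same decision, and the model can be criticized, refined, and eventually checked against monitoring data.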

The bottom line is that modeling and monitoring are not separate camps but really are inextricably connected and feed each other within the process.


Tuesday, April 14, 2015

ERRATA Regarding the last post: Drs. Crump and Berman were Contractors for the EPA

Below is an email from Dr. Frank Mirer correcting my mistaken impression that Drs. Crump and Berman were contractors for clients with commercial interests in asbestos when, in fact, they were hired as contractors by the EPA.    My apologies for the misinformation.

Regarding the asbestos piece. First, thanks for noticing it and your thoughtful comments.

Second, more important, when I wrote that Kenny Crump was a “contractor,” I had intended to convey that he was an EPA contractor, not writing a study for management. Can you correct this? This support is disclosed in one of the papers (although the authors do disclose support from a management group as well). In recent years, EPA has commissioned scientific documents of this type which were subsequently published in peer reviewed journals. Kenny Crump co-authored a commentary on formaldehyde for EPA which was helpful to the precautionary side, and which was (in my opinion) validated by recent new knowledge. In the asbestos case, increased risk estimates for amphiboles would support lower tolerances for Libby asbestos, which would be precautionary for Libby and which was EPA’s main concern.
[Can you include the above in your next post, and, if you have an email, forward it to Kenny. Thanks.]

Sunday, April 12, 2015

OELs and Politics

I have often stated that I believe the setting of Occupational Exposure Limits is a political process; however, just as important as the politics, it is a process that needs to be informed by science.   This fact came into sharp focus for me when I read a recent article in this month’s (April 2015) issue of The Synergist (a publication of the American Industrial Hygiene Association).    The article is entitled “ABCs and Asbestos Risk Assessment” by Dr. Frank Mirer.   Frank walks us through the available science and presents his conclusions concerning the potential for less risk from chrysotile asbestos than from other forms of this mineral.   Within this article Frank reviews a controversial analysis and conclusion presented to the EPA’s Science Advisory Board by Drs. Crump and Berman.   Dr. Mirer referred to them as contractors who presumably were supported by the asbestos industry.   

Drs. Crump and Berman concluded that:
 “The best estimates of the potency of chrysotile (for mesothelioma) ranged from zero only up to 1/200th of the potency of amphibole asbestos… Furthermore, the hypothesis that chrysotile does not cause mesothelioma could not be rejected in any analysis that allowed at least some amphibole contamination in the locations where exposures were principally to chrysotile…(F)or lung cancer … the best estimates of the potency of chrysotile were at least six-fold smaller than the corresponding estimates for amphibole asbestos.” 
I strongly recommend that you obtain and read the article by Frank where he outlines and presents the reasons for his primary conclusion; namely, that even if chrysotile is somewhat less potent than amphibole, a significant risk of cancer remains at the current OSHA PEL.   He also concludes that the link between chrysotile and mesothelioma has not been broken.   Reportedly, most SAB members agree with him.

For me this discussion really brings home the fact that the exposure limit setting process is, at its heart, political.   We cannot ban everything that is toxic.  However, there may be a reasonable argument for banning asbestos given our current state of control and assessment technology. 

I think it is very healthy for the process that accomplished and capable technologists like Drs. Crump and Berman present these arguments in the service of economic interests, just as long as their intellectual treatments and suppositions are completely open and properly vetted.   My sense is that this happened in this case.

It may be entirely possible that chrysotile does not cause mesothelioma in humans.  From what I can determine, it simply has not been satisfactorily proven in the context of a reasonably precautionary approach.   Perhaps one day we will have tools that allow it to be proven to a reasonable scientific and political certainty; until then we default to considering it to be a cause of this dreaded and invariably fatal disease.

Although many of our politicians have not acquitted themselves well of late, politics per se is not a dirty word.  It should be a noble endeavor and is ideally how democracies are supposed to settle questions of the general public good.   Indeed, given the uncertainty that we are constantly facing in the realm of human risk assessment, there are many issues that cannot be answered with certainty but that must be decided.   We cannot force all exposures and risks from chemicals to zero; however, we can attempt to estimate, limit and equitably balance allowable exposure (i.e., exposure limits) with the benefits derived from these exposures.  I believe that to ensure the integrity of the process, we need to do this while also admitting the limits of our scientific knowledge and the inevitable fact that our knowledge will get better with time.

Sunday, April 5, 2015

Modeling Breathing Zone Exposure is More Accurate than Monitoring!

Modeling has a bad reputation.  Indeed, the conventional wisdom is that monitoring is the Gold Standard for exposure assessment. That is, the estimation of human exposure via the modeling of breathing zone concentrations has generally been considered to be inferior to the direct measurement of the breathing zone air.   On the face of it, this attitude and conclusion seems to make perfect sense.   After all, models are simply more or less general constructs of reality, while measurement actually samples the reality of interest.   What could be clearer? 

Well it turns out to be not so clear.   The practical reality (spell that C-O-S-T) of monitoring means that not many samples will be taken to provide the estimated exposure.   Thus, when one does a rational statistical analysis of what one gets from typical monitoring data versus what one gets from modeling, modeling comes out on top. 

All of this has been laid out in strong mathematical and logical detail in an analysis performed by Dr. Mark Nicas.    It was presented in a 2002 paper (Uncertainty in Exposure Estimates Made by Modeling Versus Monitoring, AIHA Journal 63:275–283 (2002)).   I will be happy to send a PDF copy of this paper to whoever asks me for it.   Mark is a brilliant statistician and modeler.   Indeed, he has developed and tirelessly promoted the two-zone model of indoor airborne concentration while providing a very strong technical rationale for the relative strength and superiority of modeling versus standard monitoring data for typical exposure scenarios.  I was the second author on the paper, but Mark performed all the “heavy lifting” relative to the development of the mathematical rationale. 
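
For readers unfamiliar with it, the two-zone (near-field/far-field) model divides the room into a small zone surrounding the source and the remainder of the room, coupled by an interzonal airflow β. At steady state the far-field concentration is G/Q and the near field carries an additional G/β increment. The sketch below shows only these steady-state results with hypothetical input values; the published model also gives the full time-dependent equations.

```python
def two_zone_steady_state(G: float, Q: float, beta: float):
    """Steady-state concentrations for the two-zone (NF/FF) model.

    G    : contaminant emission rate, mg/min
    Q    : room supply/exhaust ventilation rate, m^3/min
    beta : interzonal airflow between near and far field, m^3/min

    Returns (C_near_field, C_far_field) in mg/m^3.
    """
    c_ff = G / Q             # far field: the standard well-mixed result
    c_nf = c_ff + G / beta   # near field: adds the G/beta increment
    return c_nf, c_ff

# Hypothetical inputs: 100 mg/min source, 20 m^3/min ventilation,
# 5 m^3/min interzonal airflow.
c_nf, c_ff = two_zone_steady_state(G=100, Q=20, beta=5)
print(c_nf, c_ff)  # near field 25.0 mg/m^3, far field 5.0 mg/m^3
```

The practical payoff is that a worker close to the source can see a concentration several-fold higher than a general-area sample would suggest, which is exactly the kind of insight a purely well-mixed model misses.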

I am, of course, biased, but I think the paper should have gotten a lot more interest than it did; however, given the complexity of the analysis, I can perhaps understand why it did not get more attention.  What is nice about this blog is that I can dust off this work and present it in summary form to a new audience and in a new light.   

Mark showed, very logically, that for a sample size of three or fewer workdays, mathematical modeling rather than air monitoring should provide a more accurate estimate of the long-term mean inhalation exposure level if the anticipated geometric standard deviation (GSD) of the distribution of airborne concentrations exceeds 2.3.   When the number of samples is n = 1, this is true for GSD > 1.7.   Paul Hewitt, in his publication Interpretation and Use of Occupational Exposure Limits for Chronic Disease Agents (Occupational Medicine: State of the Art Reviews, 11(3), July-September 1996), tells us that “The range of GSDs - 1.5 to 3.0 - covers the range of most ‘within-worker’ GSDs commonly observed in practice.”   My experience has been that the majority of these are on the upper end of this range.
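
A simplified way to see why high GSDs favor modeling (this is an illustration of the intuition, not the full framework in the Nicas paper): for lognormally distributed exposures, the two-sided 95% range of the sample geometric mean spans roughly a factor of GSD^(1.96/√n) above and below its center. At a GSD of 2.3 and n = 3, that factor is about 2.6, so even a "measured" estimate carries substantial uncertainty.

```python
import math

def gm_uncertainty_factor(gsd: float, n: int) -> float:
    """Approximate one-sided 95% uncertainty factor for the sample
    geometric mean of n lognormal measurements with the given GSD."""
    return math.exp(1.96 * math.log(gsd) / math.sqrt(n))

# Sweep the Hewitt range of within-worker GSDs for a 3-sample survey.
for gsd in (1.5, 2.3, 3.0):
    f = gm_uncertainty_factor(gsd, n=3)
    print(f"GSD={gsd}: sample GM uncertain within a factor of ~{f:.2f}")
```

With only a handful of samples, that uncertainty band can easily rival or exceed the uncertainty in a well-implemented model, which is the heart of the argument above.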

Certainly modeling will do a better job than monitoring when the typical number of monitoring samples taken in most scenarios is n = 0, as effectively and convincingly presented by John Mulhausen a number of years ago. 

The concluding paragraphs from the 2002 paper are reproduced below:

A framework has been described for comparing uncertainty in estimates of the long-term mean exposure level made by modeling versus monitoring. Although not developed here, a related approach can be used to compare estimates of other exposure parameters such as the 95th percentile of the C distribution. The NF exposure model was used to illustrate the framework, but the authors recognize that different models are more realistic for other scenarios, and that the time-activity pattern of the exposed employee must always be considered. Central to the utility of the modeling approach is that the model be an appropriate physical descriptor of the contaminant emission rate function, the pattern of contaminant dispersion in room air, and the manner of removal from room air. Because the traditional focus of industrial hygiene has been monitoring rather than modeling, the available mechanistic (physical-chemical) indoor air exposure models have not been systematically investigated and validated. In turn, this lack of research leaves much present-day uncertainty regarding source emission rates, dispersion patterns in air, and sink effects.
Based on preliminary analysis, the authors argue that directing research funds to developing and validating mechanistic exposure models will ultimately provide cost-effective exposure assessment tools. Their availability would encourage more assessments of compliance with OSHA permissible exposure limits in small- and medium-sized workplaces that lack the dedicated services of an industrial hygienist. Further, validated models would have benefits beyond OSHA-related compliance determinations. Employers could proactively use such models to devise appropriate exposure controls in planning new processes and operations. Consumer product and environmental regulatory agencies could use models to assess the safety of products that release airborne toxicants. Mathematical models could be applied to epidemiological studies for retrospective exposure estimation when past monitoring data are poor in quality or nonexistent, as is often the case. At a minimum, validated models could be used to rank average exposure levels by task/job, and if sufficient information were available, could provide quantitative estimates for exploring dose-response relationships. (emphasis added)