
Monday, May 11, 2015

The Rule-of-10 in Occupational Exposure to VOCs

Some individuals are blessed with a rare combination of technical and managerial acumen along with a gift for seeing the big picture. In the realm of Industrial Hygiene I can think of no one who personifies these traits better than Mark Stenzel. During Mark’s long and very productive career he developed the “Rule-of-10” for inhalation exposure to volatile organic substances. The rule is presented below:


Level of Control: Fraction of Saturation Vapor Concentration (SVC)
  • Confined Space – Virtually no circulation: 1/10th of Saturation
  • Poor – Limited circulation: 1/100th of Saturation
  • Good – General (~6 air turnovers/hr): 1/1,000th of Saturation
  • Capture: 1/10,000th of Saturation
  • Containment: 1/100,000th of Saturation

You may remember that:
Saturation Concentration (ppm v/v) = (Vapor Pressure / Atmospheric Pressure) × 1,000,000
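
To make the screen concrete, here is a minimal Python sketch of the arithmetic (my own illustration, not something from Mark): it computes the saturation vapor concentration from the formula above and applies the Rule-of-10 fraction for each level of control, flagging any level where the estimate exceeds the OEL. The vapor pressure (20 mmHg) and OEL (50 ppm) are made-up values for illustration only.

# Minimal illustration of the Rule-of-10 screen (illustrative values only).

ATM_PRESSURE_MMHG = 760.0  # standard atmosphere, mmHg

# Fraction of the saturation vapor concentration expected for each level of control
RULE_OF_10 = {
    "confined space (virtually no circulation)": 1 / 10,
    "poor (limited circulation)":                1 / 100,
    "good general (~6 air turnovers/hr)":        1 / 1_000,
    "capture":                                   1 / 10_000,
    "containment":                               1 / 100_000,
}

def saturation_conc_ppm(vapor_pressure_mmhg: float) -> float:
    """Saturation concentration (ppm v/v) = (VP / atmospheric pressure) * 1,000,000."""
    return (vapor_pressure_mmhg / ATM_PRESSURE_MMHG) * 1_000_000

# Hypothetical example: a solvent with a vapor pressure of 20 mmHg and an OEL of 50 ppm
vp_mmhg, oel_ppm = 20.0, 50.0
svc = saturation_conc_ppm(vp_mmhg)
print(f"Saturation vapor concentration: {svc:,.0f} ppm")

for control, fraction in RULE_OF_10.items():
    estimate = svc * fraction
    flag = "red flag" if estimate > oel_ppm else "likely OK"
    print(f"{control:45s} ~{estimate:10,.1f} ppm  ({flag})")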

Below are some comments Mark sent to me when I asked him if I could write a blog post about the rule. After he agreed, he provided the following background:

“I have been using the Rule-of-10 for at least 35 years.  It was a basic component of our qualitative exposure assessment process at both Celanese and later at OxyChem.  We sent every exposure scenario through this qualitative process and predicted the likely exposure category (similar to AIHA’s except we had an extra category 0.1 to 0.25 times the OEL and then 0.25 to 0.5 time the OEL). We then established our sample strategy based on this assignment.  If we thought that exposures were above 0.5 times the OEL we collected at least 5 samples evenly over the year.  If our predicted exposure was between 0.25 and 0.50 we collected at least 4 samples and if the predicted exposures were between 0.1 and 0.25 times the OEL we collected 3 samples and for all exposures thought to be less than 0.1 times the OEL, we would randomly pick out 10 SEGs [Similar Exposure Groups] and collect 3 samples.  Usually, on an annual basis, we would analyze all of our data and prepare an interpretive statement for each facility. Part of the analysis was to evaluate how well our qualitative assessment performed.  We did consider several other determinants of exposure in our algorithm beyond the Rule- of-10.  They included the observed level of control, frequency and duration of the exposure scenario, the hazard vapor ratio and the particulate hazard index.  I found that the Rule-of-10 to be amazingly accurate considering how simple it is. I have found the rule to be very beneficial in other applications such as the chemical approval process related to the introduction of new chemicals into the workplace or the change of existing processes; in auditing IH programs; in conducting due diligence related to potential acquisitions; and in inspections.” 
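
For readers who like things spelled out, the sampling strategy Mark describes can be boiled down to a few lines of logic. The little function below is my own paraphrase of his description (the category boundaries and sample counts are taken from the quote above); it is a sketch, not code that was ever part of the Celanese or OxyChem programs.

def samples_per_year(predicted_fraction_of_oel: float) -> str:
    """Sampling strategy keyed to the predicted exposure, expressed as a
    fraction of the OEL (my paraphrase of Mark's description, for illustration)."""
    if predicted_fraction_of_oel > 0.5:
        return "at least 5 samples, spread evenly over the year"
    if predicted_fraction_of_oel > 0.25:
        return "at least 4 samples"
    if predicted_fraction_of_oel > 0.1:
        return "3 samples"
    return "pool with other low-exposure SEGs; randomly pick 10 SEGs and collect 3 samples each"

print(samples_per_year(0.6))   # -> at least 5 samples, spread evenly over the year
print(samples_per_year(0.05))  # -> pool with other low-exposure SEGs; ...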

Further information on the implementation of Mark’s Rule-of-10, along with a more detailed check-list approach developed by Susan Arnold, will be included in the upcoming 4th Edition of the AIHA Exposure Strategies Book. My sense is that these additions alone will be worth the price of this basic reference, which every Industrial Hygienist should have.

In another note to me, Mark makes the following points: 

“In the examples of the application of the Rule-of-10 [in the 4th Edition] …, the IH may only be in the workplace a few minutes and likely does not have access to a “good” basic characterization.  Even in these cases, if the IH only knows what chemicals are present (and approximate composition), have an OEL or at least an estimate of an OEL and know the vapor pressure (information that is readily available) they can use the Rule of 10 as a screen.  That is, they know what type of controls and configurations they should be observing and match that up with what they are observing.  So the end point is not to classify the exposure into the correct exposure category, but rather to use it as a tool to raise “red flags”; where they should ask more questions and where they need to do a more formal exposure assessment.”

The 4th Edition of the Exposure Strategies Book should be out this year.  I heartily recommend you get it.

  


Sunday, May 3, 2015

OELs: The “HOT” Topic at this year’s AIHce in Salt Lake City

Those of you who read this blog regularly know that I am very interested in exposure limits, especially Occupational Exposure Limits (OELs). They represent literally half of the risk assessment process done by most Industrial Hygienists, who compare the exposures they measure (EXP) to the OEL in the classic hazard index HI = EXP/OEL. HI > 1 is bad news ☹; HI < 1 is good news ☺.

As such, OELs are critical to the IH profession.
The history of OELs as a whole is that they have almost invariably come DOWN in value with time. Indeed, I can think of only a single example where an OEL went UP. That fact indicates to me that we have not done a particularly good job of understanding and explaining the uncertainty associated with our OELs.

Understanding the uncertainties associated with OELs and communicating the nature and size of the uncertainty as a part of the documentation has been a point I have been trying to make for some time.

In late May and early June I get to present some of these ideas again at the 2015 American Industrial Hygiene Conference & Exposition (AIHce) in Salt Lake City, May 30 – June 4, 2015.


On Sunday, starting at 8 a.m., I get to teach a Professional Development Course with my friend and colleague Dr. Andy Maier (University of Cincinnati) and Dr. Dan Arieta (Chevron Phillips). These folks are a rare breed in the ranks of Industrial Hygiene; namely, they are board-certified toxicologists. For my part, I am less expert in the realm of toxicology but very enthusiastic about the subject as it relates to OELs. The title of the PDC is: The Hierarchy of Occupational Exposure Limits (OELs): A Suite of Tools for Assessing Worker Health Risk (PDC402). The discussion topics and presenters in this PDC are shown below:
  • Why the OEL Hierarchy Concept (Maier)
  • Concepts of Toxicology for OEL Users (Arieta)
  • The WEELs Keep Turning – Setting Traditional OELs (Maier)
  • To Be a Savvy OEL User – OEL Limitations (Jayjock)
  • Activity – The Great OEL Debate (All)
  • REACH DNELs (Arieta)
  • OEBs and Other Hazard-Based Techniques (Maier)
  • Risk-Based OELs (Jayjock)

On Monday morning I get to talk about Risk-Based OELs and the Risk@OEL during a Science Symposium Session entitled Occupational Exposure Banding (OEBs): The Solution for the Glacial Pace of OELs, from 10:30 am to 12:50 pm. The title of my talk will be: Risk@OEL: An Approach to Conduct Tier 3 OEBs.

My last presentation is Roundtable RT236, "Toxicological Challenges to the Derivation and Application of Occupational Exposure Limits (OELs)". It will happen on Wednesday afternoon from 1 to 4:30 pm. I will be talking about Aligning Occupational Exposure Limits (OELs) with Exposures, especially as they relate to bolus exposures.


I will write about the bullet points of these talks and offer the slides from them in future blogs.   In the meantime,  I am looking forward to meeting some of you at these events and hearing from you about what future topics you want to see in this blog and about what aspects of OELs are important to you.     

Sunday, April 26, 2015

Focusing on Residential Air Monitoring

I am often asked to review data on environmental exposure to a specific chemical as part of a human health risk assessment for the general population. This usually means reviewing the available scientific literature for the presence of the chemical in various media, including ambient and indoor air, water, environmental surfaces, soil and indoor dust. The basic idea is to determine where the chemical might be showing up and whether the concentrations being measured might present a risk to people. The premise is fairly simple. It is assumed that people:
  • Breathe air
  • Touch surfaces with various body parts
  • Have hand-to-mouth activity that results in ingested dirt, dust and their contaminants

The literature search includes where and in what products the chemical is used, which provides some insight into where and how exposures might occur.

The heart of the search is typically monitoring studies done on the above mentioned media in homes and other occupied spaces.   Since we spend most of our time indoors, that part of the process makes sense to me.   We are then presented with data on the measured concentrations in these media.  The data are usually highly variable with a large range of concentrations and here is where the difficulty begins for me. 

If the media samples were taken at random, then how do I determine whether enough samples were taken to present a reasonably complete picture? Imagine, if you will, a situation in which only 1% of the US population is being overexposed in their homes to Compound X. That represents over 3 million people. The binomial theorem tells us how big a sample one would need to take to see a 1% response. If any of my readers would like me to go through the math, I can do so in a future blog; it is a valuable tool and I would be happy to do so. If, however, you are willing to trust that I did it properly, I can tell you that taking 100 samples (i.e., sampling 100 homes) at random has a 37% chance (that is, greater than a 1 in 3 probability) of NOT seeing the 1 house in 100 that is being overexposed to the chemical.
  
So how many houses have to be sampled at random to find the 1 in 100 with a 95% confidence of seeing it, or only a 5% chance of NOT seeing it? The answer is a sample of roughly 300 houses. These are not whimsical predictions; the binomial theorem is as real as it gets. It is why insurance companies do not lose money on their policies and casinos do not lose on their games of chance.
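
For those who want to check the arithmetic themselves, here is a minimal sketch of the calculation (plain binomial math, nothing exotic): the chance of completely missing the affected homes in n random samples is (1 - p)^n, and the sample size needed for a given confidence follows directly.

import math

def prob_miss(p: float, n: int) -> float:
    """Probability that n randomly sampled homes contain zero affected homes,
    when a fraction p of all homes is affected."""
    return (1.0 - p) ** n

def n_required(p: float, confidence: float = 0.95) -> int:
    """Smallest n giving at least the stated chance of seeing one affected home."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

p = 0.01  # 1% of homes overexposed
print(f"Chance of missing with 100 homes: {prob_miss(p, 100):.0%}")  # ~37%
print(f"Homes needed for 95% confidence:  {n_required(p)}")          # ~299, i.e., roughly 300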

I know of few monitoring studies that have gone into more than a few score of homes, and none that have sampled 300.

What is the lesson in all this?

There are two big lessons in this for me.  One, never discount a HIGH value unless you have real evidence that it truly is an artifactual outlier.   

The second lesson is that we need to do a better job of understanding the potential sources of the chemical and then forgo at least some of the random sampling, going directly to the houses with these sources and sampling them.

If the concern is acute exposure then the sampling needs to happen proximate to the initiation of a fresh source with potentially waning strength.  

If the concern is the migration of a semi-volatile organic compound (SVOC) out of a plastic or foam matrix and into dust, then be sure to give the source enough time to reach equilibrium before sampling, perhaps as long as a few years.

All of the above monitoring could be helped and informed by laboratory experiments and modeling, but you have heard that song from me before.


I would love to hear your comments on these issues and any experience you might have with them. 

Sunday, April 19, 2015

Models and Monitoring are NOT Enemy Camps

A recent blog here asserting that modeling can be more accurate than monitoring may, as a result of its title, have unfortunately reinforced the old notion that modeling and monitoring are at odds with one another. That blog was written because many consider monitoring to be the “gold standard” and believe that modeling will never be accepted as a reasonable substitute for this “proper” characterization of exposure. The truth is that modeling alone, absent field or experimental work to monitor exposure scenarios and to implement, evaluate and refine the models, is a relatively anemic activity.

It is true that one can use “first principles” related to known physical properties of the materials, along with accounting procedures that keep track of how much substance might be going into and out of any volume of air, but these are all dependent on what I call sub-models (a minimal sketch of such an accounting model appears after the list below). We need to understand such critical "monitored" realities as:

  • How the air is moving relative to its velocity and volumetric rate
  • The characteristics of the emitting source:
    • How big is it?
    • Is it a point or an area?
    • The rate of emission as a function of time
    • Competing sources within the scenario
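
As promised above, here is a minimal sketch of the simplest such accounting model: a single well-mixed box in which the airborne concentration is driven by an emission rate G, a ventilation rate Q and a room volume V. The numbers are made-up illustrative inputs; choosing realistic values for G, Q and V is exactly where the monitored sub-model information listed above comes in.

# Minimal well-mixed (one-zone) mass-balance sketch with illustrative inputs.
# dC/dt = (G - Q*C) / V  (constant source, no sinks), solved analytically.
import math

G = 100.0   # emission rate, mg/min (assumed)
Q = 20.0    # ventilation rate, m^3/min (assumed)
V = 100.0   # room volume, m^3 (assumed)

def conc_mg_m3(t_min: float, c0: float = 0.0) -> float:
    """Concentration at time t for a constant source in a well-mixed room."""
    c_ss = G / Q  # steady-state concentration
    return c_ss + (c0 - c_ss) * math.exp(-Q * t_min / V)

for t in (5, 15, 60, 240):
    print(f"t = {t:3d} min: C = {conc_mg_m3(t):.2f} mg/m^3 (steady state = {G/Q:.1f})")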

All of these require at least some level of experimentation and data gathering (i.e., monitoring) to properly implement the model. After this phase, the model output needs to be evaluated against MONITORING of the exposure potential. If the model got it essentially right, the monitoring will show this. If not, then the model builders should gain some insight from the monitoring results as to how to improve the model. It is clearly an iterative process in which the monitoring continually shows the model builders where the model needs improving.

Once the model is developed, however, it should really help to inform the monitoring practitioner as to where he or she needs to monitor and, more important, where they do NOT need to monitor. The typical Industrial Hygienist (IH) in an industrial facility is often faced with scores, perhaps hundreds, of “monitoring opportunities”. These are scenarios that might result in significant exposure to workers. Given the practical limitations of available resources, he or she will simply not be able to monitor everything everywhere. Normally, the IH in this situation applies “expert judgement” to eliminate and exempt the majority of scenarios from undergoing monitoring. Indeed, John Mulhausen has made the critical point that the typical number of exposure samples taken per exposure scenario is ZERO.
  
So how does an IH who only does monitoring decide where to monitor? Well, some scenarios are obvious when at least one of the following factors is present:

  • The workers are showing symptoms of overexposure
  • The chemical is highly toxic (low OEL)
  • The process
    • Is fast (producing relatively high levels of product and airborne contaminant)
    • Involves a considerable amount of volatile or dusty material
    • Is relatively open or “leaky”
    • Occurs at elevated temperature
Whether they realize it or not, I believe that many, if not most, IH practitioners in this situation are applying their own personal "experience model" to estimate whether the ratio of potential exposure to the exposure limit for the chemical is significant. If this subliminal model tells them that the ratio can be close to or greater than one then they typically move forward to monitor the scenario. 

What my colleagues and I have been asking for quite a few years now is: Why run a subliminal model when they can use explicit mathematical models with all of their advantages to inform these decisions? 

The bottom line is that modeling and monitoring are not separate camps but really are inextricably connected and feed each other within the process.




Tuesday, April 14, 2015

ERRATA Regarding the last post: Drs. Crump and Berman were Contractors for the EPA

Below is an email from Dr. Frank Mirer correcting my mistaken impression that Drs. Crump and Berman were contractors for clients with commercial interests in asbestos when, in fact, they were hired as contractors by the EPA. My apologies for the misinformation.

Hi
Regarding the asbestos piece. First, thanks for noticing it and your thoughtful comments.

Second, more important, when I wrote that Kenny Crump was a “contractor,” I had intended to convey that he was an EPA contractor, not writing a study for management. Can you correct this? This support is disclosed in one of the papers (although the authors do disclose support from a management group as well). In recent years, EPA has commissioned scientific documents of this type which were subsequently published in peer reviewed journals. Kenny Crump co-authored a commentary on formaldehyde for EPA which was helpful to the precautionary side, and which was (in my opinion) validated by recent new knowledge. In the asbestos case, increased risk estimates for amphiboles would support lower tolerances for Libby asbestos, which would be precautionary for Libby and which was EPA’s main concern.
[Can you include the above in your next post, and, if you have an email, forward it to Kenny. Thanks.]

Sunday, April 12, 2015

OELs and Politics

I have often stated that I believe the setting of Occupational Exposure Limits is a political process; however, just as important as the politics, it is a process that needs to be informed by science. This fact came into sharp focus for me when I read a recent article in this month’s (April 2015) issue of The Synergist (a publication of the American Industrial Hygiene Association). The article is entitled “ABCs and Asbestos Risk Assessment” by Dr. Frank Mirer. Frank walks us through the available science and provides his conclusions concerning the potential for less risk from chrysotile asbestos than from other forms of this mineral. Within this article Frank reviews a controversial analysis and conclusion presented to the EPA’s Science Advisory Board by Drs. Crump and Berman. Dr. Mirer referred to them as contractors who presumably were supported by the asbestos industry.

Drs. Crump and Berman concluded that:
 “The best estimates of the potency of chrysotile (for mesothelioma) ranged from zero only up to 1/200th of the potency of amphibole asbestos… Furthermore, the hypothesis that chrysotile does not cause mesothelioma could not be rejected in any analysis that allowed at least some amphibole contamination in the locations where exposures were principally to chrysotile…(F)or lung cancer … the best estimates of the potency of chrysotile were at least six-fold smaller than the corresponding estimates for amphibole asbestos.” 
I strongly recommend that you obtain and read the article by Frank where he outlines and presents the reasons for his primary conclusion; namely, that even if chrysotile is somewhat less potent than amphibole, a significant risk of cancer remains at the current OSHA PEL.   He also concludes that the link between chrysotile and mesothelioma has not been broken.   Reportedly, most SAB members agree with him.

For me this discussion really brings home the fact that the exposure limit setting process is, at its heart, political.   We cannot ban everything that is toxic.  However, there may be a reasonable argument for banning asbestos given our current state of control and assessment technology. 

I think it is very healthy for the process that accomplished and capable technologists like Drs. Crump and Berman present these arguments in the service of economic interests, just as long as their intellectual treatments and suppositions are completely open and properly vetted. My sense is that this happened in this case.

It may be entirely possible that chrysotile does not cause mesothelioma in humans. From what I can determine, it simply has not been satisfactorily proven in the context of a reasonably precautionary approach. Perhaps one day we will have tools that allow it to be proven to a reasonable scientific and political certainty; until then, we default to considering it to be a cause of this dreaded and invariably fatal disease.


Although many of our politicians have not acquitted themselves well of late, politics per se is not a dirty word. It should be a noble endeavor; it is, ideally, how democracies are supposed to settle questions of the general public good. Indeed, given the uncertainty that we are constantly facing in the realm of human risk assessment, there are many issues that cannot be answered with certainty but that must be decided. We cannot force all exposures and risks from chemicals to zero; however, we can attempt to estimate, limit and equitably balance allowable exposures (i.e., exposure limits) with the benefits derived from these exposures. I believe that to ensure the integrity of the process, we need to do this while also admitting the limits of our scientific knowledge and the inevitable fact that our knowledge will get better with time.

Sunday, April 5, 2015

Modeling Breathing Zone Exposure is More Accurate than Monitoring!

Modeling has a bad reputation.  Indeed, the conventional wisdom is that monitoring is the Gold Standard for exposure assessment. That is, the estimation of human exposure via the modeling of breathing zone concentrations has generally been considered to be inferior to the direct measurement of the breathing zone air.   On the face of it, this attitude and conclusion seems to make perfect sense.   After all, models are simply more or less general constructs of reality, while measurement actually samples the reality of interest.   What could be clearer? 

Well, it turns out to be not so clear. The practical reality (spell that C-O-S-T) of monitoring means that not many samples will be taken to provide the estimated exposure. Thus, when one does a rational statistical analysis of what one gets from typical monitoring data versus what one gets from modeling, modeling comes out on top.

All of this has been laid out in strong mathematical and logical detail in an analysis performed by Dr. Mark Nicas. It was presented in a 2002 paper (Uncertainty in Exposure Estimates Made by Modeling Versus Monitoring, AIHA Journal 63:275–283 (2002)). I will be happy to send a pdf copy of this paper to whoever asks me for it at mjayjock@gmail.com. Mark is a brilliant statistician and modeler. Indeed, he has developed and tirelessly promoted the 2-zone model of indoor airborne concentration while providing a very strong technical rationale for the relative strength and superiority of modeling versus standard monitoring data for typical exposure scenarios. I was the second author on the paper, but Mark performed all the “heavy lifting” relative to the development of the mathematical rationale.

I am, of course, biased, but I think the paper should have gotten a lot more interest than it did; however, given the complexity of the analysis I can perhaps understand why it did not get more attention. What is nice about this blog is that I can dust off this work and present it in summary form to a new audience and in a new light.

Mark showed, very logically, that for a sample size of three or fewer workdays, mathematical modeling rather than air monitoring should provide a more accurate estimate of the long-term mean inhalation exposure level if the anticipated geometric standard deviation (GSD) for the distribution of airborne concentrations exceeds 2.3. When the number of samples is n = 1, this is true for GSD > 1.7. Paul Hewett, in his publication Interpretation and Use of Occupational Exposure Limits for Chronic Disease Agents (Occupational Medicine: State of the Art Reviews, 11(3), July–Sept 1996; an online version is available), tells us that “The range of GSDs - 1.5 to 3.0 - covers the range of most ‘within-worker’ GSDs commonly observed in practice.” My experience has been that the majority of these are on the upper end of this range.
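
To give a feel for why small sample sizes hurt monitoring so much, below is a rough Monte Carlo sketch (my own illustration, not the formal derivation in the Nicas paper, and it requires numpy): it draws n full-shift samples from a lognormal exposure distribution with an assumed GSD and looks at how widely the resulting sample mean scatters around the true long-term mean. The 3-fold "model accuracy" used for comparison is simply an assumption for the example.

# Rough Monte Carlo sketch: scatter of the sample mean from n monitoring results
# drawn from a lognormal exposure distribution, versus an assumed model accuracy.
import numpy as np

rng = np.random.default_rng(0)

true_mean = 1.0          # long-term mean exposure, arbitrary units
gsd = 2.5                # assumed within-worker geometric standard deviation
model_fold_error = 3.0   # assume the model estimate is within 3-fold of the truth

sigma = np.log(gsd)
mu = np.log(true_mean) - 0.5 * sigma**2   # makes the lognormal mean equal true_mean

for n in (1, 3, 6, 12):
    # sampling distribution of the mean of n full-shift monitoring results
    means = rng.lognormal(mu, sigma, size=(20_000, n)).mean(axis=1)
    lo, hi = np.percentile(means, [2.5, 97.5])
    fold = np.sqrt(hi / lo)   # crude geometric half-width of the 95% range
    verdict = "modeling looks better" if fold > model_fold_error else "monitoring looks better"
    print(f"n = {n:2d}: 95% range of sample mean {lo:.2f}-{hi:.2f} (~{fold:.1f}-fold) -> {verdict}")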

Certainly modeling will do a better job than monitoring when the typical number of monitoring samples taken in most scenarios is n = 0, as effectively and convincingly presented by John Mulhausen a number of years ago.

The concluding paragraphs from the 2002 paper are reproduced below:

A framework has been described for comparing uncertainty in estimates of the long-term mean exposure level made by modeling versus monitoring. Although not developed here, a related approach can be used to compare estimates of other exposure parameters such as the 95th percentile of the C distribution. The NF exposure model was used to illustrate the framework, but the authors recognize that different models are more realistic for other scenarios, and that the time-activity pattern of the exposed employee must always be considered. Central to the utility of the modeling approach is that the model be an appropriate physical descriptor of the contaminant emission rate function, the pattern of contaminant dispersion in room air, and the manner of removal from room air. Because the traditional focus of industrial hygiene has been monitoring rather than modeling, the available mechanistic (physical-chemical) indoor air exposure models have not been systematically investigated and validated. In turn, this lack of research leaves much present-day uncertainty regarding source emission rates, dispersion patterns in air, and sink effects.
Based on preliminary analysis, the authors argue that directing research funds to developing and validating mechanistic exposure models will ultimately provide cost-effective exposure assessment tools. Their availability would encourage more assessments of compliance with OSHA permissible exposure limits in small- and medium-sized workplaces that lack the dedicated services of an industrial hygienist. Further, validated models would have benefits beyond OSHA-related compliance determinations. Employers could proactively use such models to devise appropriate exposure controls in planning new processes and operations. Consumer product and environmental regulatory agencies could use models to assess the safety of products that release airborne toxicants. Mathematical models could be applied to epidemiological studies for retrospective exposure estimation when past monitoring data are poor in quality or nonexistent, as is often the case. At a minimum, validated models could be used to rank average exposure levels by task/job, and if sufficient information were available, could provide quantitative estimates for exploring dose-response relationships. (emphasis added)