
Sunday, March 29, 2015

Hope and Change in Human Health Risk Assessment

Sarah Palin once mocked our President by asking, “How’s that hopey changey thing working out for you?”   It struck me that hope and change, or more specifically the hope for positive change, is (or should be) an important and positive human attribute.   Indeed, I see the opposite sentiment as resignation, stagnation and despair, which really does not serve anyone well.

In the interest of full disclosure, I have to admit that I am a hard-core Pollyanna, defined by the Cambridge Online Dictionary as a person who believes that good things are more likely to happen than bad things, even when this is very unlikely.   Indeed, for more than 25 years I have been predicting that general, widespread acceptance and application of quantitative human health risk assessment will happen within the next two years!   One could argue that it has not happened to date because the vast majority of chemical exposures and risks remain poorly characterized.   See previous posts: 

  • We do not estimate Quantitative Risk for Most Chemicals (May 24, 2013)
  • Why Isn’t Risk Assessment Done on All Chemicals? (October 28, 2013)
  • Exposure Modeling Data Base Needs (August 8, 2014)
  • We do NOT spend enough on Risk Assessment (November 24, 2014)

I do want to point out some of the remarkable advances that have occurred over the last 25 years within the science and practical tools of human health exposure and risk assessment.  From my perspective, most of them have come from volunteers within the committees of the American Industrial Hygiene Association.  The Exposure Strategies and Modeling books and the associated modeling software have been great advances on the exposure assessment side.   I think most of my colleagues, however, will agree that we could do a lot more given more resources. 
 
On the toxicity side of things the situation is much worse.  Aside from a few promising exceptions (like EPA ToxCast), it has remained stagnant for years: we do not have nearly enough exposure limits, and the necessary improvements in their documentation do not even appear to be on the horizon.

All of this is not to say that some companies and organizations do not recognize the importance of doing the exposure or toxicity work needed to provide a good assessment of chemical risk and, in turn, a rational and responsible job of managing that risk.    They have what I call “enlightened self-interest”; they understand that doing this work is both the “right thing to do” and, in the long run, “good business”.   Most of my clients fall into this category; however, I must tell you that I do not have many clients.   If you want more insight into one aspect of this reality from someone with a lot more field experience than I have, check out the following post on this blog:

IH’s Dirty Little Secret (January 26, 2014)

Indeed, some companies and the organizations that speak for them see their relationship with regulators as essentially adversarial: covertly but unapologetically war-like.    As a risk assessment scientist working for a large chemical company from the 1970s through 2003, I participated in industry group meetings where I saw this attitude first-hand.  

Below is a small and edited excerpt from a letter I wrote many years ago to a staffer for one such group, asking their advice:

“I believe that it is not accurate or useful to accuse or think of the regulators as not using science.  It implies that we own the scientific truth and I tell you that we do not.  We may have some more or less sophisticated scientific information that supports a less conservative regulation but our interpretation may not be compelling to the regulators who obviously have a different perspective.  More important, they own the risk management call.   It’s not that they do not use science but the reality is that they use “their” science and not “ours”.   We need to work on understanding their perspective and their standard of proof while gaining some common ground and building consensus.
Our science policy should be one of cooperation, understanding and bridging and not confrontation.  If the science that we developed is particularly well done then historically the difference between “their” science and “ours” should narrow.   There are a few examples of this but they all involved considerable expense to do the work that needed doing.”

The letter went on to seek advice from the staffer as to whether I should ever openly voice such an opinion and advocate for this change in policy and approach.    The response was that it would not be appreciated, and I was advised not to give it voice.

That was almost 15 years ago, and practical, meaningful reform of the chemical regulatory process in this country, including the needed risk assessment, continues to face serious opposition from groups speaking for the industry.

I believe that the industry in general would do much better to broadly embrace enlightened self-interest and work for rational regulation and significant development of the science.   That remains my fervent hope, but I am an admitted Pollyanna.   Indeed, it is entirely possible that the cold hand of regulation may be the only force that ultimately catalyzes and provides real change and advancement in the science.

Sunday, March 22, 2015

100 Posts on Human Health and Safety

The post this week is about blogging in general and this blog in particular.  When I started it almost two years ago, I did not think it would last for 100 posts, but it definitely has, with at least a few more to go.  What I would like to do with the remainder of this post is point out some of the things I have learned and some of the very positive aspects of writing a blog.

I did not think that the blog was going to last this long because I did not think I had more than a few dozen topics that I could discuss.   I was wrong, and the biggest factor in being wrong is that I did not count on the encouragement and inspiration I would get from you, the readers of this blog.   I started with a list of topics, but that list just kept getting longer as I got more feedback.   Indeed, the interaction and networking with you, the readers, has turned out to be the single most positive aspect of writing it.  Thanks to all who are helping to keep this going.

This blog has been a remarkable instrument for reaching folks, especially the professional folks I wish to interact with.   According to the statistics kept by blogger.com, readers have visited this blog some 47,023 times in the 48 weeks since it started, for an average of about 980 views per week.  I rarely lecture to more than 50 folks in a class, or perhaps 10 times that number at a professional conference session.  Plus, I hear from blog readers from all over the world whom I would have had very little chance of ever meeting. 

I am sure other blogging services do this as well, but one of the aspects that I really like about Google Blogger (blogspot.com) is that it provides a complete archive of all the previous posts, so that one can page back through them and find topics that might be of interest.    As a further convenience, I have cut and pasted all of the post titles to date below. 

I remain very interested in hearing your comments about any particular post and about what topics might be of interest to you in the realm of human exposure to chemicals and risk assessment.

99 Blogs from April 2013 to March 2015

The Air Pollution We Breathe:  Where does it come from?

Risk Based OELs (RBOELs)

Sunday, March 15, 2015

The Air Pollution We Breathe: Where does it come from?

Over the years a number of “truths” have been reasonably well established about chemical air pollution.   One of the strongest of these is that our exposure to air pollution is typically dominated by what we breathe while indoors.    There are a number of reasons for this:
  1. We spend the vast majority of our time indoors
  2. Most outdoor pollution can penetrate to the indoor environment
  3. Mixing and air change rates of fresh air are invariably lower indoors than they are outside
  4. Indoor sources of chemical pollutants often dominate over outdoor sources

One way of looking at our world from the perspective of inhalation exposure is to divide the pollution we experience into two primary source categories: near-field and far-field.    The cartoon below is designed to provide some insight into this dichotomy.


We all know about, and many of us are concerned about, large far-field sources of air pollution such as industrial emissions from smoke stacks or other large sources like automotive emissions, large spills or releases, or forest fires.    Smog or ozone alerts fit into this category.   Indeed, the majority of regulatory effort, through laws like the Clean Air Act, has been directed at these sources.   They are clearly important; however, it has been shown that indoor sources of pollution can be just as important as, and in some cases more important than, these large sources.  

Over 30 years ago Lance Wallace of the EPA conducted pioneering studies under the acronym TEAM, for Total Exposure Assessment Methodology (http://nepis.epa.gov/Adobe/PDF/2000UC5T.PDF).  
He concluded that indoor airborne concentrations of essentially all eleven target chemicals he examined were greater than the outdoor concentrations at various locations in the US.   He and his team were the first to find that in most cases, even in locations with concentrated industrial sources, indoor sources significantly outweighed the impact of traditional far-field sources such as chemical plants, petroleum refineries, local dry cleaners or service stations.
It is worth noting that this study was done by the US EPA, whose primary charge is to focus on non-occupational sources and exposures.   Presumably, significant exposures at work are even more intense and are likewise dominated by near-field sources. 

How does one measure near-field inhalation exposure?   The direct method is to monitor the breathing zone of persons while the source is present or while they are engaged in an activity that generates a near-field source.   For example, applying hair spray might be considered a fairly intense, acute near-field exposure event for the chemicals present within the sprayed product.    Other relatively intense near-field sources include any activity that releases a toxicant and is done at “arm’s length” or “tool’s length”, such as digging, mopping or sweeping.    Measuring and timing these acute exposures allows one to reasonably characterize them, as sketched in the example below.
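
As a crude illustration of how such timed measurements might be rolled up into a full-shift number, here is a minimal Python sketch; the task durations, concentrations and background value are entirely hypothetical.

# Hypothetical example: combine timed near-field "events" with background
# exposure into a time-weighted average (TWA) concentration.

def twa(events, background_conc, total_minutes):
    """events: list of (duration_min, conc_mg_m3) for near-field tasks.
    background_conc: concentration (mg/m3) assumed for the rest of the period.
    total_minutes: averaging time (e.g., 480 for an 8-hour TWA)."""
    event_minutes = sum(d for d, _ in events)
    event_dose = sum(d * c for d, c in events)            # conc x time
    background_dose = (total_minutes - event_minutes) * background_conc
    return (event_dose + background_dose) / total_minutes

# Three hypothetical spraying/mopping tasks measured in the breathing zone
tasks = [(5, 12.0), (10, 4.5), (3, 20.0)]                 # (minutes, mg/m3)
print(f"8-hr TWA = {twa(tasks, background_conc=0.2, total_minutes=480):.2f} mg/m3")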

Another form of near-field exposure in this context are the relatively constant emissions and exposures that may be coming from indoor sources like building products.   Here “capturing” the time or spatial  element of the resulting concentrations may not as critical especially if the sources are “spread-out” like off-gassing paint or carpeting.

Near-field exposures indoors can also be modeled, but here the terminology can get a bit confusing.   A “near-field” exposure model sizes the near field as a conceptual geometric volume (often a sphere, hemisphere or cylinder) that includes both the source and the nose and mouth of the exposed person.   If that volume effectively includes the entire house, or an entire room within the house, because the source is “spread out” as mentioned above, then a well-mixed box model could be a good choice.   If, however, the source and the person occupy only a fraction of the room's volume, then one would use the two-zone model, with the near field being the volume containing the source and the breathing zone and the far field being the rest of the room’s volume.
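
To make the distinction concrete, below is a minimal Python sketch of the steady-state forms of these two models as they are commonly presented: the well-mixed box gives C = G/Q, while the two-zone model gives C_FF = G/Q for the far field and C_NF = G/Q + G/beta for the near field, where G is the emission rate, Q the room ventilation rate and beta the airflow between the two zones.  All of the input numbers are made up for illustration.

# Minimal steady-state comparison of a well-mixed box model and the
# two-zone (near-field / far-field) model.  All inputs are hypothetical.

G = 10.0      # emission rate, mg/min
Q = 1.0       # general (room) ventilation rate, m3/min
beta = 0.5    # near-field / far-field air exchange rate, m3/min

# Well-mixed box: the whole room treated as one zone.
C_box = G / Q                      # mg/m3

# Two-zone model at steady state:
C_far  = G / Q                     # far-field concentration, mg/m3
C_near = G / Q + G / beta          # near-field concentration, mg/m3

print(f"Well-mixed box:       {C_box:6.1f} mg/m3")
print(f"Two-zone far field:   {C_far:6.1f} mg/m3")
print(f"Two-zone near field:  {C_near:6.1f} mg/m3")

With these made-up inputs the near-field estimate is three times the well-mixed value, which is the whole point of using the two-zone approach when the source sits close to the breathing zone.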

It is important to remember what Lance Wallace taught us; namely, that most human inhalation exposures occur in the near-field.   Of course, dermal exposure, almost by definition, occurs within the near-field.

As usual, I would love to hear your thoughts about, and experience with, near-field exposure assessment, especially compared to what might or might not be happening in the far field.






Sunday, March 8, 2015

Estimating Airborne Particulate Exposure from Overspray


We use a lot of products that are sprayed.   Everything from hair spray to furniture spray to window cleaner is spray-applied, either from a pressurized can or a trigger sprayer.   The manufacturers of these products want ALL of the product to go from the container to the target.   In most cases, most of it does; however, not all of the material reaches the target, and some remains in the air as an aerosol of particulates.  This remaining airborne fraction has been called “overspray” or, in some cases, "bounce back".    Whatever it is called, this airborne fraction is a potential health concern because the vast majority of spraying occurs in what we call the “near field”.   This is the volume of air that includes the sprayed product and the breathing zone (nose and mouth) of the person doing the spraying.   The smaller the near field, the higher the potential for significant concentrations and exposure.   Since most spraying occurs literally at arm’s length, and some products like hair spray are used at considerably closer distances to the breathing zone, the possibility of at least acute inhalation exposure exists.

In doing risk assessments for these types of products, manufacturers have historically depended on cursory data and “rule of thumb” estimates of the amount of overspray to use in their analyses.   A few years ago, one of my clients at Procter and Gamble asked if we could do a better job of quantitatively estimating overspray.   That effort ultimately culminated in a paper published in 2012.    I would be happy to send this paper to anyone requesting it at mjayjock@gmail.com.

What I want to do in the remainder of this post is to provide you with the primary learnings from that work:

The predominant mechanism for particle overspray from sprayed products is the failure of relatively small particles in the sprayed stream to impact the surface, because of their tendency to remain in the flow lines of the air stream.   The figure below illustrates this effect, which, by the way, is the primary mechanism exploited by particle size impactors like the Andersen Impactor:




Thus, even when sprayed at 90 degrees to the surface, some of the smaller particles will follow the streamlines and escape capture.   As such, the term “overspray” is a bit of a misnomer: in this case, everything is being sprayed directly at the target and nothing is being sprayed over the target.    Another term used to describe this loss, “bounce back”, is also a misnomer, since studies and data indicate there is essentially no bounce associated with wet aerosol particles sprayed against a solid surface; all the wet particles that hit the surface stick.  
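
A rough way to see why only the small particles escape is the dimensionless Stokes number: particles with a Stokes number well below 1 tend to follow the streamlines, while those well above 1 tend to impact.  The sketch below uses assumed values for the spray velocity, characteristic flow dimension and droplet density, and ignores the slip correction, so it is meant only to show the trend with particle size.

# Rough Stokes-number illustration of why small droplets follow the air
# streamlines while larger ones impact the target.  Inputs are hypothetical;
# the Cunningham slip correction is neglected for simplicity.

MU_AIR = 1.8e-5     # air viscosity, Pa.s
RHO_P  = 1000.0     # droplet density, kg/m3 (water-like, assumed)
U      = 10.0       # spray jet velocity near the target, m/s (assumed)
L      = 0.01       # characteristic flow dimension, m (assumed, ~1 cm)

def stokes_number(d_um):
    d = d_um * 1e-6                              # diameter in metres
    return RHO_P * d**2 * U / (18.0 * MU_AIR * L)

for d_um in (2, 5, 15, 30, 50):
    stk = stokes_number(d_um)
    fate = "tends to impact" if stk > 1 else "tends to follow streamlines"
    print(f"{d_um:>3} um droplet: Stk = {stk:6.2f}  -> {fate}")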

Other conclusions include:

1. Small aerosol particles (less than 15 μm MMAD) make up the vast majority of measured airborne overspray from sprayed products.
2. Larger wet particles will have a much stronger tendency to impact and stick to the receiving surface. Relatively large particles that are indeed “oversprayed” past the target and do not stick tend to quickly settle out of the air.
3. As a worst case, wet particles less than or equal to 30 μm could rapidly evaporate to respirable size in a time frame relevant to the exposure event. This size range should be considered in estimating the potential respirable mass.
4. Also, almost all particles greater than 30 μm MMAD that remain airborne after spraying will settle 200 cm (2 meters) downward and be on the floor within 1 min (see the settling calculation sketched after this list). Thus, between impaction loss and settling, few particles much above 30 μm are left to become or remain airborne beyond 1 min after spraying.
5. Any future work on evaluating this exposure/risk should focus on respirable and near thoracic particle sizes, that is, particles with an ultimate aerodynamic diameter below 15 μm.
6. Any reduction in the mass of the low-end particle size distribution tail or bimodal “hump” from spray products will directly and significantly reduce any overspray potential.
7. Overspray potential from sprayed products is best estimated using a real-time laser particle sizer. A rough average from available data would indicate a reasonable worst-case respirable overspray potential of 5% of the emitted mass and a worst-case total aerosol overspray potential of 6% for trigger sprayed products.
8. Theoretical considerations indicate that hard surfaces (e.g., metal or glass) should react in a manner similar to soft surfaces, such as cloth with a low pile. That is, they should produce essentially the same amount and type of airborne overspray from sprayed liquid aerosol.
9. The spreadsheet model developed for this work should be useful for estimating the amount of potential overspray based on particle size distribution of the spray. This model will overestimate the overspray potential for thick pile targets such as hair and carpet.
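
As a back-of-the-envelope check on items 4 and 7 above, here is a minimal sketch using Stokes' law for unit-density spheres (no slip correction, no evaporation) to estimate settling times over a 2-meter fall, together with the worst-case respirable overspray mass for a hypothetical 1-gram trigger-spray event.  The 30 μm particle comes out at roughly a minute to reach the floor, with larger sizes settling faster.

# Back-of-the-envelope check on conclusions 4 and 7: Stokes settling of
# unit-density spheres over a 2 m fall, and a worst-case respirable
# overspray mass for a hypothetical 1 g spray event.  Slip correction and
# evaporation are ignored, so treat the numbers as rough.

G_ACC  = 9.81       # gravitational acceleration, m/s2
MU_AIR = 1.8e-5     # air viscosity, Pa.s
RHO_P  = 1000.0     # unit-density particle, kg/m3

def settling_velocity(d_um):
    """Stokes terminal settling velocity, m/s."""
    d = d_um * 1e-6
    return RHO_P * G_ACC * d**2 / (18.0 * MU_AIR)

for d_um in (15, 30, 50):
    v = settling_velocity(d_um)
    print(f"{d_um} um: v = {100*v:.1f} cm/s, time to fall 2 m = {2.0/v:.0f} s")

# Worst-case respirable overspray (item 7): ~5% of the emitted mass.
emitted_mass_mg = 1000.0                     # hypothetical 1 g spray event
respirable_mg = 0.05 * emitted_mass_mg
print(f"Worst-case respirable overspray ~ {respirable_mg:.0f} mg per event")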

As usual, I welcome hearing about any of your thoughts or any experience you have in this matter.







Sunday, March 1, 2015

Short Term or Bolus Exposure Limits

The question of the proper assignment of short-term or bolus exposure limits is not easy and has not received a lot of attention.  Most of us know about 8-hour and 15-minute exposure limits for occupational settings.  Some of us have even worked with instantaneous or Ceiling limits for the few chemicals to which they are assigned.  But the general question of short-term (a few seconds to, say, 60 minutes) exposure limits has not been heavily explored.   The question came up in last week’s blog (and offered paper), in which a spill generated a quick peak in the airborne and potential breathing zone concentration profile that begged for an exposure limit against which to judge this peak exposure.   We deferred to the ACGIH excursion rule.

Enter Andrew Anthony “Tony” Havics, CIH, PE (www.ph2LLC.com), who has thought about this quite a bit over the years.   Tony responded to last week's blog with the following thoughtful and, I think, very useful technical perspective, which I have excerpted below:   

“The question of what concentration to use as a limit should probably be a post in itself. The ACGIH TLV concept of an excursion limit has changed over the years. The WEEL Committee currently uses the same concept as the TLV Committee. It is supported by statistics and empirical data in that if one considers a maximum 15 minute period average given an 8-hour OEL, Leidel has shown that the ratio of the 95th percentile to the mean for a GSD of 2.5 is 2.97. Since a reasonably controlled (by IH standards) workplace SEG would have a GSD of less than or = 3, the concept of 3xOEL is not a bad application. 

But one should consider whether there is a way of creating a short-term limit for a situation that has more empirical tox or biological basis. One way is to consider the endpoint. Is it an irritant with acute response? If so then the acceptable limit should not increase, e.g., OEL30min = (1)*OEL480min. Is a carcinogen? If so, then one has likely assumed a linear model of dose-response that should be constant, meaning that the acceptable limit follows Haber’s rule (actually Fluery’s rule) in that C*t = K, so OEL30min =(16)*OEL480min. Others would be somewhere in between. ten Berge proposed that (C^n)*t = K [n = 1 for carcinogens by my description]. I would suggest considering (C^n)*(t^a) = K [n = 1 & a=1 for carcinogens and n = 1, a = 0 for irritants]. Using ten Berge’s equation, one can evaluate a set of chemicals to see what the range of the variable n is typically. I evaluated 256 studies for the endpoint of death on 15 chemicals a while ago (Havics, unpublished, 2005) and found, like ten Berge, that almost all were from n= 1 to n = 3. Assuming that the model applies to other endpoints, one can estimate an equivalent OEL at 30 minutes given an OEL of say 100 ppm for 480 minutes. For n = 1, 1.5, 2, 2.5, and 3, OEL30min = 1600, 635, 400, 303, and 252 ppm, respectively; for n = 1, 1.5, 2, 2.5, and 3, OEL15min = 3200, 1008, 693, 470, and 317 ppm, respectively. So, from a conservative standpoint, the use of 3xOEL, or 300 ppm in the cases you describe seems very reasonable…” 
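
For readers who want to reproduce Tony's arithmetic, the short sketch below checks two of the numbers in the excerpt: the ratio of the 95th percentile to the mean of a lognormal exposure distribution with a GSD of 2.5, and the ten Berge scaling OEL_t = OEL_480 * (480/t)^(1/n) evaluated at 30 minutes for the assumed 8-hour OEL of 100 ppm.

import math

# Check 1: ratio of the 95th percentile to the mean of a lognormal
# exposure distribution with GSD = 2.5 (the Leidel figure quoted above).
gsd = 2.5
sigma = math.log(gsd)
ratio_95_to_mean = math.exp(1.645 * sigma - sigma**2 / 2.0)
print(f"95th percentile / mean for GSD {gsd}: {ratio_95_to_mean:.2f}")   # ~2.97

# Check 2: ten Berge scaling C**n * t = K, i.e.
# OEL_t = OEL_480 * (480 / t)**(1/n), for an assumed 8-hr OEL of 100 ppm.
oel_480 = 100.0   # ppm, hypothetical 8-hour OEL
t = 30.0          # minutes
for n in (1.0, 1.5, 2.0, 2.5, 3.0):
    oel_t = oel_480 * (480.0 / t) ** (1.0 / n)
    print(f"n = {n}: 30-min OEL = {oel_t:.0f} ppm")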

As a supplement, Tony sent me the following curve which graphically describes the above relationships from 480 minutes down to 7.5 minutes.  



I think the above general recommendation of n = 1 for carcinogens and n = 0 for irritants (i.e., a flat, time-independent limit) is reasonably conservative given a relative lack of toxicological or biological data.  Of course, as Tony points out, this recommendation may not adequately address the needs of stakeholders assessing chemicals that are not carcinogens but could have fast-acting acute effects from bolus exposures that might overwhelm the body's defenses. Here, my sense is that we (as usual) would need more data.

Using the above curves for setting exposure limits, a critical line (not on this curve but described by Tony above) is the one for n = 0, which would be flat at 100 ppm all the way across the curve down to zero time.  This flat limit should be considered the exposure-limit line one could use for any chemical that might cause a severe acute toxicological response where better data do not exist.   It is a conservative (i.e., risk-overestimating) approach for any reasonably well documented 8-hour exposure limit and any chemical for which the real n is not equal to 0.

Clearly, chemicals that cause local tissue irritation, contact-site toxicity or serious systemic toxicity could be lethal via acute inhalation of high concentrations, even for short periods of time.   For these chemicals, those setting exposure limits really need to understand the characteristics (i.e., the value of n) of the dose-response curve as a function of concentration and time of exposure. Indeed, n = 1 would not be conservative enough in many cases (see above).  Using n = 0 is, however, a reasonable default position if you have an 8-hour OEL but do not have the acute data. 
In line with these realities, Tony called to remind me that we should always be aware of the inherent uncertainty that resides in both our estimates of exposure and, in the case above, in our exposure limits.   In recent years we seem to be doing a better job of describing the uncertainty around exposure (see recent blog on MCS) but have essentially neglected this aspect on the toxicological side of risk.

Notwithstanding the fact that these and all exposure limit values are not bright lines, we have historically assigned and used them as deterministic values.  I believe that just having the awareness that they are not bright lines is an important first step.   Ultimately we need to quantitatively describe and control this uncertainty to assure the integrity of our assessments; a simple sketch of that idea follows.
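
One minimal way to move beyond bright-line thinking, in the spirit of the MCS post mentioned above, is to treat both the exposure estimate and the "true" limit as distributions and simulate.  The sketch below is purely illustrative; the lognormal choices and every parameter in it are assumptions, not recommendations.

import random, math

# Hypothetical Monte Carlo sketch: treat both the exposure estimate and the
# exposure limit as uncertain (lognormal) quantities and estimate the
# probability that exposure exceeds the limit.  All parameters are made up.

random.seed(1)
N = 100_000

def lognormal(gm, gsd):
    """Draw from a lognormal with geometric mean gm and geometric SD gsd."""
    return math.exp(random.gauss(math.log(gm), math.log(gsd)))

exceed = 0
for _ in range(N):
    exposure = lognormal(gm=30.0, gsd=2.5)    # ppm, uncertain exposure
    limit    = lognormal(gm=100.0, gsd=1.5)   # ppm, uncertain "true" limit
    if exposure > limit:
        exceed += 1

print(f"Estimated probability that exposure exceeds the limit: {exceed/N:.1%}")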

I am indebted to Tony for furthering this discussion, and I would love to hear from other readers of this blog as to how we might do a better job of providing exposure limits for bolus exposures.