Sunday, May 24, 2015

Risk of Eating (or NOT Eating) Fish

If your mother was like mine, eating fish was a good thing. She called it “brain food,” and all we kids wanted to be smart, so we would eat fish even if we did not like it.

Things have changed; eating some types of fish can presumably put your brain at risk.  Mercury, a neurotoxic contaminant in fish, comes to mind (no pun intended).  A 2004 FDA web site (still viewable online at http://www.fda.gov/food/resourcesforyou/consumers/ucm110591.htm) advises us NOT to eat:

  • Shark
  • Swordfish
  • King Mackerel
  • Tilefish

This same web site advises eating only limited amounts of tuna.

Predatory fish are relatively high on the food chain and toxic contaminants tend to bio-accumulate up the food chain such that these higher-ranked species have higher levels of mercury.   Of course, we all know who is at the top of the food chain!

Then there is the additional risk of cancer, again from eating fish that are relatively high on the food chain, for the same reason; that is, the accumulation of potentially cancer-causing toxicants in the fish.

So what is a person to do? If you hate fish, that is a good excuse for avoiding it, but that avoidance is not rational.

I found an excellent web site that puts a lot of this into context: http://www.aicr.org/enews/2015/04-april/enews-fish-and-cancer.html

The following quote is from that site:

“Again and again, research shows that people eating diets with a moderate amount of seafood have lower risk of cancer and other chronic diseases and longer lives.”

It goes into some detail as to why this is the case, but it makes the excellent general point that almost all risk is attended with some benefit.  Often the benefit clearly outweighs the risk.

Black and white thinking is generally not useful, and this is particularly so in the realm of risk assessment and risk management.  If exposure to unacceptable levels of a substance is occurring, then clearly some risk management action needs to be taken to reduce or eliminate that exposure.  If it is not practical to reduce the exposure, then exposure to that substance should be eliminated; however, elimination should happen only when there is no other reasonable alternative.

We all need to remember that the risk of death in this life for everyone is 10⁰ or ONE.  Something is going to get each and every one of us.  Our job as rational beings is to pick and choose what risks we are willing to accept along with the benefits they provide.  That means doing the best job we can at risk assessment.


In conclusion, eat your fish!

Sunday, May 17, 2015

Is Human Health Risk Assessment the Best Tool to Make Decisions?

Human health risk assessment has a bad reputation in some minds. Dr. Peter Montague is an intelligent and articulate thinker, yet he also pretty much disparages human health risk assessment, especially its quantitative aspects.  The following quote from his writing more than a few years ago struck me the hardest, as I believe it was designed to do:
“Risk assessors are now in the position of the conductors and engineers who kept the trains running on time to the death camps in Nazi Germany to minimize discomfort to their passengers -- they are just doing a job, honorably and to the best of their ability, but the final result of every professional risk assessor's work is the destruction of the natural environment, one decision at a time, and the relentless spread of sickness throughout the human and wildlife populations.”  
If you want to check the entire article to determine the complete context of this quote it is available online at:   http://www.precaution.org/lib/rehn519.19961106.pdf
In another online essay Dr. Montague asserts:
“Risk assessment is one way of making decisions, but it is not the only way, and it is not the best way.[1] Furthermore, risk assessment as usually practised is unethical.” 
The online reference for this complete opinion piece is: 
It would appear that Dr. Montague believes that we ought to abandon risk assessment in favor of what has been known as the “precautionary principle”.   My read of this principle in its more drastic manifestation suggests that chemicals should be banned (without the benefit of rational risk assessment) if they pose a potentially substantial but relatively uncertain probability of harm from their use and subsequent exposure.
A remarkably thoughtful and articulate defense of risk assessment was put forth by my friend and colleague, Dr. Adam Finkel.    In a published debate with Peter Montague, Adam provides all the reasons I might need to continue to view human health risk assessment as the best way forward.  Please forgive my open admiration for this remarkably powerful and intellectual argument in which Adam defends the rational scientific framework provided by risk assessment while admitting that we have a lot of work to do.   The online reference for Adam’s complete treatise is:  http://www.precaution.org/lib/07/prn_dhn_finkel_response.070807.htm   I urge you to download it.
I am reminded in all of this of Winston Churchill’s remarks in 1947: 
     "Democracy is the worst form of government, except for all               those other forms that have been tried from time to time."   
This is how I see risk assessment as a process.
As I have written here previously, uncertainty is the bane of the risk assessment process.  Large error bands (displayed or hidden) in our analyses drive controversy, depending on which side of the political or ideological fence you reside on.  We have demonstrated that we are fully capable of both over- and under-regulating chemical exposures presumably based on risk assessment.  The real task is to shrink those error bands to provide more confident knowledge to feed rational decisions.  That means committing the resources to develop the science.
I believe that it FIRST means that we should acknowledge the size of the error bands while informing the best decisions we can with the always imperfect information in hand.
In my opinion, the authorities who set exposure limits are afraid to admit the current level of uncertainty that exists today.  To date, they have summarily dismissed all calls to do quantitative analysis of exposure limits for non-carcinogens, saying that it is either not possible or meaningless. 
I believe that this condition of not fully disclosing uncertainty also exists, but to a substantially lesser degree, in the realm of exposure assessment.
Unless or until we face up to quantitatively disclosing these uncertainties of our risk assessments to the various stakeholders, I believe that progress will continue to be slow and risk assessment will remain a legitimate target of criticism from both the left and the right.   
In any event, I believe that risk assessment will remain the "worst" tool for making decisions about the risks of chemical exposure to human health, except for all the other approaches.


Monday, May 11, 2015

The Rule-of-10 in Occupational Exposure to VOCs

Some individuals are blessed with a rare combination of technical and managerial acumen along with a gift for seeing the big picture.  In the realm of Industrial Hygiene I can think of no one who personifies  these traits better than Mark Stenzl.   During Mark’s long and very productive career he developed the “Rule-of-10” for inhalation exposure to volatile organic substances.  The rule is presented below:


Level of Control                              Fraction of Saturation Vapor Concentration (SVC)
Confined Space – Virtually no circulation     1/10th of Saturation
Poor – Limited Circulation                    1/100th of Saturation
Good – General (~6 air turnovers/hr)          1/1,000th of Saturation
Capture                                       1/10,000th of Saturation
Containment                                   1/100,000th of Saturation

You may remember that:
Saturation Concentration (ppm v/v) = (Vapor Pressure / Atmospheric Pressure) × 1,000,000
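
For readers who like to see the arithmetic spelled out, below is a minimal sketch (in Python) of how the Rule-of-10 might be used as a screening calculation.  The vapor pressure, control level and OEL in the example are made-up values for illustration only and are not part of Mark’s original formulation.

```python
# Hypothetical Rule-of-10 screening sketch; all example values are made up for illustration.

# Fraction of the saturation vapor concentration (SVC) expected for each level of control
RULE_OF_10 = {
    "confined space": 1e-1,   # virtually no circulation
    "poor":           1e-2,   # limited circulation
    "good general":   1e-3,   # ~6 air turnovers/hr
    "capture":        1e-4,   # local exhaust capture
    "containment":    1e-5,   # contained process
}

def saturation_concentration_ppm(vapor_pressure_mmHg, atmospheric_pressure_mmHg=760.0):
    """SVC (ppm v/v) = (vapor pressure / atmospheric pressure) x 1,000,000."""
    return (vapor_pressure_mmHg / atmospheric_pressure_mmHg) * 1_000_000

def rule_of_10_estimate_ppm(vapor_pressure_mmHg, control_level):
    """Screening estimate of the airborne concentration for a given level of control."""
    return saturation_concentration_ppm(vapor_pressure_mmHg) * RULE_OF_10[control_level]

# Example: a hypothetical solvent with a vapor pressure of 5 mmHg handled under good general
# ventilation, screened against an assumed OEL of 50 ppm.
estimate = rule_of_10_estimate_ppm(vapor_pressure_mmHg=5.0, control_level="good general")
print(f"Screening estimate: {estimate:.1f} ppm; hazard ratio vs. assumed 50 ppm OEL: {estimate / 50.0:.2f}")
```

As Mark notes below, the point of such a screen is not to pin down the exact exposure category but to raise red flags worth a closer look.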

Below are some comments Mark sent to me when I asked him if I could write a blog about it.  After he said it was OK to write about it, he provided the following background:

“I have been using the Rule-of-10 for at least 35 years.  It was a basic component of our qualitative exposure assessment process at both Celanese and later at OxyChem.  We sent every exposure scenario through this qualitative process and predicted the likely exposure category (similar to AIHA’s except we had an extra category 0.1 to 0.25 times the OEL and then 0.25 to 0.5 times the OEL). We then established our sample strategy based on this assignment.  If we thought that exposures were above 0.5 times the OEL we collected at least 5 samples evenly over the year.  If our predicted exposure was between 0.25 and 0.50 we collected at least 4 samples and if the predicted exposures were between 0.1 and 0.25 times the OEL we collected 3 samples and for all exposures thought to be less than 0.1 times the OEL, we would randomly pick out 10 SEGs [Similar Exposure Groups] and collect 3 samples.  Usually, on an annual basis, we would analyze all of our data and prepare an interpretive statement for each facility. Part of the analysis was to evaluate how well our qualitative assessment performed.  We did consider several other determinants of exposure in our algorithm beyond the Rule-of-10.  They included the observed level of control, frequency and duration of the exposure scenario, the hazard vapor ratio and the particulate hazard index.  I found the Rule-of-10 to be amazingly accurate considering how simple it is. I have found the rule to be very beneficial in other applications such as the chemical approval process related to the introduction of new chemicals into the workplace or the change of existing processes; in auditing IH programs; in conducting due diligence related to potential acquisitions; and in inspections.”

Further information on the implementation of Mark’s Rule-of-10, along with a check-list approach developed by Susan Arnold that is designed to provide a more detailed analysis, will be included in the upcoming 4th Edition of the AIHA Exposure Strategies Book.  My sense is that these additions alone will be worth the price of this basic reference that every Industrial Hygienist should have.

In another note to me, Mark makes the following points: 

“In the examples of the application of the Rule-of-10 [in the 4th Edition] …, the IH may only be in the workplace a few minutes and likely does not have access to a “good” basic characterization.  Even in these cases, if the IH only knows what chemicals are present (and approximate composition), have an OEL or at least an estimate of an OEL and know the vapor pressure (information that is readily available) they can use the Rule of 10 as a screen.  That is, they know what type of controls and configurations they should be observing and match that up with what they are observing.  So the end point is not to classify the exposure into the correct exposure category, but rather to use it as a tool to raise “red flags”; where they should ask more questions and where they need to do a more formal exposure assessment.”

The 4th Edition of the Exposure Strategies Book should be out this year.  I heartily recommend you get it.

Sunday, May 3, 2015

OELs: The “HOT” Topic at this year’s AIHce in Salt Lake City

Those of you who read this blog regularly know that I am very interested in exposure limits, especially Occupational Exposure Limits (OELs).  They represent literally half of the risk assessment process done by most Industrial Hygienists when they compare the exposures they measure (EXP) to the OEL in the classic hazard index (HI) = EXP/OEL.  An HI > 1 is bad news; an HI < 1 is good news.
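
As a trivial worked example (with made-up numbers), the calculation looks like this:

```python
# Hazard index: HI = EXP / OEL.  The exposure and OEL values below are hypothetical.
exposure_ppm = 12.0   # measured 8-hour TWA exposure
oel_ppm = 50.0        # occupational exposure limit

hi = exposure_ppm / oel_ppm
print(f"HI = {hi:.2f} ({'greater' if hi > 1 else 'less'} than 1)")   # HI = 0.24, less than 1
```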

As such, OELs are critical to the IH profession.
The history of OELs as a whole is that they have almost invariably come DOWN in value with time.  Indeed, I can only think of a single example where an OEL went UP.  That fact indicates to me that we have not done a particularly good job of understanding and explaining the uncertainty associated with our OELs.

Understanding the uncertainties associated with OELs and communicating the nature and size of the uncertainty as a part of the documentation has been a point I have been trying to make for some time.

In June I get to present some of these ideas again at the 2015 American Industrial Hygiene Conference & Exposition (AIHce) in Salt Lake City May 30 – June 4, 2015.


On Sunday, starting at 8 am, I get to teach a Professional Development Course with my friend and colleague Dr. Andy Maier (University of Cincinnati) and Dr. Dan Arieta (Chevron Phillips).  These folks are a rare breed in the ranks of Industrial Hygiene; namely, they are board-certified toxicologists.  For my part, I am less expert in the realm of toxicology but very enthusiastic about the subject as it relates to OELs.  The title of the PDC is: The Hierarchy of Occupational Exposure Limits (OELs): A Suite of Tools for Assessing Worker Health Risk (PDC402).  The discussion topics and presenters in this PDC are shown below:
Topic                                                   Presenter
Why the OEL Hierarchy Concept                           Maier
Concepts of Toxicology for OEL Users                    Arieta
The WEELs Keep Turning - Setting Traditional OELs       Maier
To Be a Savvy OEL User - OEL Limitations                Jayjock
Activity - The Great OEL Debate                         All
REACH DNELs                                             Arieta
OEBs and Other Hazard-Based Techniques                  Maier
Risk-Based OELs                                         Jayjock

On Monday morning I get to talk about Risk-Based OELs and the Risk@OEL during a Science Symposium Session entitled Occupational Exposure Banding (OEBs): The Solution for the Glacial Pace of OELs, from 10:30 am to 12:50 pm.  The title of my talk will be: Risk@OEL: An Approach to Conduct Tier 3 OEBs

My last presentation is Roundtable RT236, "Toxicological Challenges to the Derivation and Application of Occupational Exposure Limits (OELs)".  It will happen on Wednesday afternoon from 1:00 to 4:30 pm.  I will be talking about Aligning Occupational Exposure Limits (OELs) with Exposures, especially as they relate to bolus exposures.


I will write about the bullet points of these talks and offer the slides from them in future blogs.   In the meantime,  I am looking forward to meeting some of you at these events and hearing from you about what future topics you want to see in this blog and about what aspects of OELs are important to you.     

Sunday, April 26, 2015

Focusing on Residential Air Monitoring

I am often asked to review data on environmental exposure to a specific chemical as part of a human health risk assessment of the general population’s exposure to that chemical.  This usually means reviewing the available scientific literature for the presence of the chemical in various media, including ambient and indoor air, water, environmental surfaces, soil and indoor dust.  The basic idea is to determine where the chemical might be showing up and whether the concentrations being measured might present a risk to people.  The premise is fairly simple.  It is assumed that people:
  • Breathe air
  • Touch surfaces with various body parts
  • Have hand-to-mouth activity that results in ingestion of dirt, dust and their contaminants

The literature search includes where and in what products the chemical is used which provides some insight into where and how exposures might occur.

The heart of the search is typically monitoring studies done on the above mentioned media in homes and other occupied spaces.   Since we spend most of our time indoors, that part of the process makes sense to me.   We are then presented with data on the measured concentrations in these media.  The data are usually highly variable with a large range of concentrations and here is where the difficulty begins for me. 

If the media samples were taken at random, then how do I determine whether enough samples were taken to present a reasonably complete picture?  Imagine, if you will, a situation in which only 1% of the US population is being overexposed in their homes to Compound X.  That represents over 3 million people.  The binomial theorem tells us how big a sample one would need to take to see a 1% response.  If any of my readers would like me to go through the math, I can do so in a future blog; it is a valuable mathematical tool and I would be happy to do so.  If, however, you are willing to trust that I did it properly, I can tell you that taking 100 samples (i.e., sampling 100 homes) at random has a 37% chance (that is, greater than a 1-in-3 probability) of NOT seeing the 1 house in 100 that is being overexposed to the chemical.
  
So how many houses have to be sampled at random to find the 1 in 100 with a 95% confidence level of seeing it (that is, only a 5% chance of NOT seeing it)?  The answer is a sample of about 300 houses.  These are not whimsical predictions; the binomial theorem is as real as it gets.  It is why insurance companies do not lose money on their policies and casinos do not lose on their games of chance.
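
For anyone who wants to check the numbers, here is a minimal sketch of the calculation, assuming the homes are sampled independently and truly at random:

```python
# Chance of missing a condition present in a fraction p of homes when n homes are sampled at random.
import math

def prob_of_missing(p, n):
    """Probability that none of the n randomly sampled homes is among the affected fraction p."""
    return (1 - p) ** n

def homes_needed(p, confidence=0.95):
    """Smallest n giving at least the stated confidence of seeing one or more affected homes."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

p = 0.01  # 1% of homes overexposed to Compound X
print(f"Chance of missing it with 100 homes: {prob_of_missing(p, 100):.2f}")  # ~0.37
print(f"Homes needed for 95% confidence:     {homes_needed(p)}")              # 299, i.e., roughly 300
```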

I know of few monitoring studies that have gone into more than a few score of homes, and none that have sampled 300.

What is the lesson in all this?

There are two big lessons in this for me.  One, never discount a HIGH value unless you have real evidence that it truly is an artifactual outlier.   

The second lesson is that we need to do a better job of understanding the potential sources of the chemical, then forgo at least some of the random sampling and instead go directly to houses with these sources and sample them.

If the concern is acute exposure then the sampling needs to happen proximate to the initiation of a fresh source with potentially waning strength.  

If the concern is the migration of a semi-volatile organic compound (SVOC) out of a plastic or foam matrix and into dust, then be sure to give the source enough time to reach equilibrium before sampling, perhaps as long as a few years.

All of the above monitoring could be helped and informed by laboratory experiments and modeling, but you have heard that song from me before.


I would love to hear your comments on these issues and any experience you might have with them. 

Sunday, April 19, 2015

Models and Monitoring are NOT Enemy Camps

A recent blog here asserting that modeling can be more accurate than monitoring may have, as a result of its title, unfortunately reinforced the old notion that modeling and monitoring are at odds with one another.  The blog was written because many consider monitoring to be the “gold standard” and believe that modeling will never be accepted as a reasonable substitute for this “proper” characterization of exposure. The truth is that modeling alone, absent field or experimental work to monitor exposure scenarios and to implement, evaluate and refine the models, is a relatively anemic activity.

It is true that one can use “first principles” related to the known physical properties of the materials, along with accounting procedures that keep track of how much substance might be going into and out of any volume of air, but these are all dependent on what I call sub-models.  We need to understand such critical "monitored" realities as:

  • How the air is moving, including its velocity and volumetric flow rate
  • The characteristics of the emitting source:
    • How big is it?
    • Is it a point or an area source?
    • What is the rate of emission as a function of time?
    • What competing sources are present within the scenario?

All of these require at least some level of experimentation and data gathering (i.e., monitoring) to properly implement the model.  After this phase, the model output needs to be evaluated against MONITORING of the exposure potential.  If the model got it essentially right, the monitoring will show this.  If not, then the model builders should gain some insight from the monitoring results as to how to improve the model.  It is clearly an iterative process in which the monitoring continually shows the model builders where the model needs improving.
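
To make the “accounting procedure” idea above a little more concrete, here is a minimal sketch of the classic one-box (well-mixed room) model, one of the simplest of these first-principles constructs.  The emission rate, ventilation rate and room volume are assumed values chosen purely for illustration.

```python
# One-box (well-mixed room) model: V * dC/dt = G - Q * C, with a constant emission rate G.
# All inputs below are assumed values for illustration only.
import math

def box_model_concentration(G_mg_per_min, Q_m3_per_min, V_m3, t_min, C0_mg_per_m3=0.0):
    """Concentration (mg/m3) at time t in a well-mixed room with a constant-emission source."""
    C_ss = G_mg_per_min / Q_m3_per_min                       # steady-state concentration G/Q
    return C_ss + (C0_mg_per_m3 - C_ss) * math.exp(-Q_m3_per_min * t_min / V_m3)

# Example: a 100 mg/min source in a 30 m3 room with 3 m3/min of general ventilation, after 1 hour.
c_1hr = box_model_concentration(G_mg_per_min=100, Q_m3_per_min=3, V_m3=30, t_min=60)
print(f"Concentration after 1 hour: {c_1hr:.1f} mg/m3 (steady state = {100/3:.1f} mg/m3)")
```

Every input to even this toy model, the emission rate above all, is exactly the kind of “sub-model” that has to be informed by measurement.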

Once the model is developed, however, it should really help to inform the monitoring practitioner as to where he or she needs to monitor and, more important, where he or she does NOT need to monitor. The typical Industrial Hygienist (IH) in an industrial facility is often faced with scores, perhaps hundreds, of “monitoring opportunities”.  These are scenarios that might result in significant exposure to workers.  Given the practical limitations of available resources, he or she will simply not be able to monitor everything everywhere.  Normally, the IH in this situation applies “expert judgement” to exempt the majority of scenarios from undergoing monitoring.  Indeed, John Mulhausen has made the critical point that the typical number of exposure samples taken per exposure scenario is ZERO.
  
So how does an IH who only does monitoring decide where to monitor?  Well, some scenarios are obvious when at least one of the following factors is present:

  • The workers are showing symptoms of overexposure
  • The chemical is highly toxic (low OEL)
  • The process:
    • Is fast (producing relatively high levels of product and airborne contaminant)
    • Involves a considerable amount of volatile or dusty material
    • Is relatively open or “leaky”
    • Occurs at elevated temperature
Whether they realize it or not, I believe that many, if not most, IH practitioners in this situation are applying their own personal "experience model" to estimate whether the ratio of the potential exposure to the exposure limit for the chemical is significant. If this subliminal model tells them that the ratio could be close to or greater than one, then they typically move forward to monitor the scenario.

What my colleagues and I have been asking for quite a few years now is: Why run a subliminal model when they can use explicit mathematical models with all of their advantages to inform these decisions? 

The bottom line is that modeling and monitoring are not separate camps but really are inextricably connected and feed each other within the process.

Tuesday, April 14, 2015

ERRATA Regarding the last post: Drs. Crump and Berman were Contractors for the EPA

Below is an email from Dr. Frank Mirer correcting my mistaken impression that Drs. Crump and Berman were contractors for clients with commercial interests in asbestos when, in fact, they were hired as contractors by the EPA.  My apologies for the misinformation.

Hi
Regarding the asbestos piece. First, thanks for noticing it and your thoughtful comments.

Second, more important, when I wrote that Kenny Crump was a “contractor,” I had intended to convey that he was an EPA contractor, not writing a study for management. Can you correct this? This support is disclosed in one of the papers (although the authors do disclose support from a management group as well). In recent years, EPA has commissioned scientific documents of this type which were subsequently published in peer reviewed journals. Kenny Crump co-authored a commentary on formaldehyde for EPA which was helpful to the precautionary side, and which was (in my opinion) validated by recent new knowledge. In the asbestos case, increased risk estimates for amphiboles would support lower tolerances for Libby asbestos, which would be precautionary for Libby and which was EPA’s main concern.
[Can you include the above in your next post, and, if you have an email, forward it to Kenny. Thanks.]