
Wednesday, June 26, 2013

Every Industrial Hygienist should be a Modeler

If you are a practicing Industrial Hygienist (IH), you are a Risk Assessor.  You compare measured or estimated exposures to exposure limits, and this activity characterizes or assesses the risk.  You typically do not develop exposure limits, but you do estimate, measure, or otherwise gauge the level of actual exposure to the agent in question.  We do this mostly with inhalation exposure.  When we actually measure, we do it by measuring the concentration of the agent in the potential breathing zone of the worker and, as mentioned before, comparing it to an occupational exposure limit (OEL) with the same units of concentration and the same time frame.  Thus the stock-in-trade of the IH is exposure estimation used in the context of risk assessment.
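
As a minimal sketch of that comparison (the task concentrations, durations, and the 50 ppm OEL below are hypothetical illustrations, not any particular standard):

```python
# Minimal sketch: compare a worker's 8-hour time-weighted average (TWA)
# exposure to an OEL in the same units. All values are hypothetical.

samples = [        # (concentration in ppm, duration in hours) per task
    (30.0, 2.0),
    (75.0, 1.0),
    (10.0, 5.0),   # durations sum to the 8-hour shift
]

oel_ppm = 50.0     # hypothetical 8-hour TWA OEL

twa = sum(c * t for c, t in samples) / 8.0   # 8-hour TWA in ppm
ratio = twa / oel_ppm                        # >1 suggests the OEL is exceeded

print(f"8-hr TWA = {twa:.1f} ppm; exposure/OEL ratio = {ratio:.2f}")
```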

It has been well shown that the vast majority of occupational exposures are NOT measured but are instead estimated using EXPERT JUDGMENT.  Clearly, industrial hygienists are running some algorithm (that is, a MODEL) subconsciously in their minds that either concludes that the risk is insignificant or that it is in need of further evaluation.  Explaining "expert judgment" has always proven to be very difficult. Some are better at invoking this hidden process than others. Indeed, the more experience you have as an industrial hygienist, the more you test or validate your subconscious models against reality and the more skilled you become.


My point is that, as tool-using animals, we are much better off if, as industrial hygienists, we raise the models we use to the level of consciousness so that we might enjoy the following advantages over the subliminal approach. The critical advantages of separate and explicit exposure models include their inherent ability to be
  • Examined by the modeler and others in a critical review of their work or use
  • Explained to folks with a stake in the prediction and outcome 
  • Applied retrospectively (estimating past exposures)
  • Shared and passed on as technology transfer (education of the next generation)
  • Tested (validated) and improved using the scientific method  
When you learn and use exposure models, you identify yourself as a technologist: someone who uses science to answer questions.  You enhance your standing with your employer and your charges, and you distance yourself from the characterization as a "Pump Jockey".  From a professional development perspective, becoming a modeler is well worth the effort.

Learning and using models is not simple, but it is not impossible either.  I have found it is best to progress with baby steps.  Start with relatively simple models, which I have found to be remarkably useful, and then progress to more sophisticated and detailed tools, which are even more valuable.  (A sketch of one such simple model appears below.)  There are lots of free software tools out there and some good educational material.  Depending on the response I get to this blog, I may dedicate the next one or two to being more specific about these opportunities and resources.
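
To make "relatively simple" concrete, here is a minimal sketch of the steady-state well-mixed room (box) model, C = G/Q, one of the first models usually taught; the generation rate, ventilation rate, and toluene scenario below are hypothetical:

```python
# Steady-state well-mixed room ("box") model: C = G / Q, where G is the
# contaminant generation rate and Q is the room ventilation rate.
# All input values are hypothetical illustrations.

def well_mixed_room_ppm(g_mg_per_min: float, q_m3_per_min: float,
                        mw_g_per_mol: float) -> float:
    """Steady-state concentration, converted from mg/m3 to ppm at 25 C, 1 atm."""
    c_mg_m3 = g_mg_per_min / q_m3_per_min
    return c_mg_m3 * 24.45 / mw_g_per_mol   # 24.45 L/mol molar volume at 25 C

# Example: 100 mg/min of toluene (MW ~92 g/mol) released into a room
# ventilated at 10 m3/min -> C = 10 mg/m3, roughly 2.7 ppm.
print(f"{well_mixed_room_ppm(100.0, 10.0, 92.1):.1f} ppm")
```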


Wednesday, June 12, 2013

Describing Uncertainty

Uncertainty is everywhere.  Indeed, there is not a thing or quantity that we can measure that does not have some level of uncertainty.  For example, the length of any "standard" object for measuring 1 yard (i.e., a yardstick) is not exactly 1 yard (except by definition); there is always some finite tolerance, plus or minus (however small), around the 1-yard mark.  We cannot be completely certain about any measurement.  Measurement tolerance is one form of uncertainty.  Quantities that vary naturally are another.  For example, the weight of adult females in Pennsylvania will have some variation, and therefore the weight of any PA female adult will fall within a range of uncertainty.  Indeed, any individual will have a weight that varies over time.  The other source of uncertainty in risk assessment, which is typically the most important and dominant, is a lack of information associated with the value of interest.  I am going to use the weight of my dog Libby in an attempt to show the interaction between natural variation and lack of knowledge.  In this whimsical example, let us assume that Libby's weight is an important value in a risk assessment: if we overestimate her weight we overestimate the risk, and if we underestimate her weight we underestimate the risk.  How much does Libby weigh?
When I pose this question to students, they usually ask me to identify Libby's breed.  I tell them only that Libby is a purebred dog and is fully grown.  I also tell them that they all have some information about the universe of dog weights on Earth.  At that point, the students usually present a range of weights to represent Libby, for example 5-200 lbs.  This implies that there is no chance Libby could weigh less than 5 or more than 200 lbs (worst-case estimated weight as risk surrogate = 200 lbs).  I then tell them that Libby is an English Springer Spaniel.  Those with online-capable phones might find that female Springers typically weigh between 35 and 45 lbs.  The students could choose 35-45 lbs as the new range of estimated weights, one born of more information, but some will remember the condition that if we underestimate Libby's weight we underestimate the risk.  These wise students then may (and have) come up with an estimated range of 35 to 70 lbs.  The next step might be to go to my home and look into the window of our garage (where Libby is) to see her.  Such an examination will disclose an overweight dog, perhaps as much as 20-30 lbs overweight.  The new estimated range from these observations might be 55-70 lbs.  The final step might be to go into my garage and weigh Libby daily over the period of a month.  Here the data might show her weight to vary between 62 and 64 lbs. This range is now a much better, data-based estimate of lowest to highest, one that reflects the natural variation of her weight, while all the previous estimates were plagued by a greater lack of knowledge.  Forced to make a deterministic (single-value) estimate of her weight, one might use 65 lbs to guard against catching her on a day when she ate or retained somewhat more than normal.
The point here is that we could ALWAYS provide some useful quantitative estimate of Libby's weight, and the uncertainty around it, even when the available data were meager.  The estimate got better (more useful) with more data.
Just to stretch this analogy even further: what if I said I have an animal in my garage, and I will not tell you what species of animal it is, but you need to estimate its weight as proportional to risk?  Here the uncertainty born of ignorance is MUCH greater than the 40-fold range of estimated weight above.  Indeed, given such a large range, one might rightly question the utility of the estimate, but the point remains that the uncertainty at any stage of the assessment should be estimated and disclosed.  Such uncertainty analysis shows where we might get the best bang for our data buck - in this case, knowing what animal species is in the garage. That specific information would then allow us to make a much narrower and more useful estimate of the range of weight as a surrogate for risk.
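
For readers who like to see the numbers, here is a small sketch that lays out the stages of the Libby story as intervals; the fold-range column makes plain how each piece of information shrinks the uncertainty:

```python
# Each stage of knowledge about Libby gives a (low, high) interval for her
# weight; the estimate is deliberately biased upward because underestimating
# weight underestimates risk. Numbers are the ones from the story above.

stages = [
    ("any purebred adult dog",     (5.0, 200.0)),
    ("English Springer Spaniel",   (35.0, 70.0)),   # breed norm, padded upward
    ("seen through garage window", (55.0, 70.0)),
    ("weighed daily for a month",  (62.0, 64.0)),   # natural variation only
]

for label, (low, high) in stages:
    print(f"{label:28s} {low:5.0f}-{high:3.0f} lbs ({high / low:4.1f}-fold range)")

# A conservative single-value (deterministic) estimate sits at or above the
# top of the final, data-based range:
print("deterministic estimate: 65 lbs")
```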

Wednesday, June 5, 2013

Uncertainty is the Bane of the Risk Assessment Process

Almost 20 years ago, I had the privilege of presenting testimony before the President's Commission on Risk Assessment.  I used that opportunity to describe and bemoan the relatively nascent state-of-the-science and how uncertainties in both the toxicology and exposure assessment parts of risk assessment were really limiting the utility and ultimately the credibility of the process.  A measure of our progress might be seen in the fact that the title of that talk and the title of this week's blog are identical.  That talk from almost 20 years ago goes into the subject in a lot more detail.  If any of you reading this ask me for it, I will send you the PowerPoint slides from that presentation (mjayjock@gmail.com).

This is not to say that no progress has been made; indeed, a lot has happened in twenty years on the exposure side of things.  Some really great tools have been developed by volunteers within the American Industrial Hygiene Association and made generally available as books and freeware.  Go to:
to get some of these models, and go to the AIHA Press: https://webportal.aiha.org/Purchase/SearchCatalog.aspx
to see the books on Exposure Assessment Strategies, Risk Assessment Principles and Mathematical Modeling.  Even with these tools, a ton of work still needs to happen on the exposure side of risk to reduce the uncertainty; however, I must say that we have come a long way.

From my perspective, even more work needs to happen in toxicology.  Clearly, advanced techniques have been developed, but the available data still do not provide confident information on what is happening in human tissue at environmentally relevant concentrations of chemical exposure.  Most of the current occupational exposure limits (OELs) rest on a toxicity database that uses the responses of animals at high doses to estimate relatively "safe" or allowable levels of exposure for humans at relatively low exposures.  See previous blogs for a more detailed discussion of this subject.
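
For illustration, here is a sketch of the usual arithmetic behind such OELs: divide an animal no-observed-adverse-effect level (NOAEL) by uncertainty factors.  The NOAEL and factor values below are hypothetical, not taken from any particular OEL documentation:

```python
# Sketch of how many OELs are derived from high-dose animal data: take a
# no-observed-adverse-effect level (NOAEL) and divide it by uncertainty
# factors. All values below are hypothetical illustrations.

noael_mg_m3 = 500.0   # hypothetical animal inhalation NOAEL

uncertainty_factors = {
    "animal-to-human extrapolation":      10.0,
    "human inter-individual variability": 10.0,
}

composite_uf = 1.0
for factor in uncertainty_factors.values():
    composite_uf *= factor

oel_mg_m3 = noael_mg_m3 / composite_uf
print(f"OEL ~ {oel_mg_m3:.0f} mg/m3 (composite UF = {composite_uf:.0f})")
```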

So what might be done with all this uncertainty?  Is the risk assessment process so laden with uncertainty as to be worthless?  I think not.  Indeed, at almost any level, a rational assessment of the health risk posed by chemical exposure is better than not doing it.  For me, the challenge is appropriately describing that uncertainty for the various stakeholders in the process.  Indeed, a suitable description of uncertainty would hopefully point the way to its reduction.

Properly handled, using the upper end of the uncertainty (for example, the 95th percentile upper bound) as the measure of risk will reasonably bias the analysis toward overestimating rather than underestimating the risk.  Thus, if one is able to shrink the range of uncertainty (with more data), the 95th percentile upper bound will come down as well, which means we are properly trading data for conservatism in the spirit of a precautionary approach.
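
A small sketch of this trade, assuming a lognormal exposure estimate whose median stays put while more data narrows the geometric standard deviation (GSD); the median and GSD values are hypothetical:

```python
# With a lognormal exposure estimate, shrinking the geometric standard
# deviation (GSD) pulls the 95th-percentile upper bound down toward the
# median. Median and GSD values are hypothetical.

from math import exp, log

Z95 = 1.645            # standard normal 95th percentile
median_ppm = 10.0

for gsd in (4.0, 2.5, 1.5):   # less data -> wider GSD; more data -> narrower
    p95 = median_ppm * exp(Z95 * log(gsd))
    print(f"GSD {gsd:.1f}: 95th percentile = {p95:5.1f} ppm "
          f"(median {median_ppm:.0f} ppm)")
```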

For many years I used Monte Carlo uncertainty analysis for the assessment of risk, which allows everyone to see the specific uncertainty that exists in, and feeds, the various parts of the assessment.  It gives the stakeholders a good view of where the uncertainties exist and how additional information might truly help the evaluation and the subsequent decisions.  This and a simpler form of uncertainty analysis will be the subject of a future blog.
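
As a minimal Monte Carlo sketch, assuming lognormal inputs to the simple box model (C = G/Q) described earlier; all distribution parameters are hypothetical:

```python
# Propagate uncertain inputs through the well-mixed room model C = G / Q
# by Monte Carlo simulation and report percentiles of the result.
# All distribution parameters are hypothetical illustrations.

import random

random.seed(1)          # reproducible sketch
N = 100_000

sims = []
for _ in range(N):
    g = random.lognormvariate(4.6, 0.5)   # generation rate, mg/min (median ~100)
    q = random.lognormvariate(2.3, 0.3)   # ventilation rate, m3/min (median ~10)
    sims.append(g / q)                    # concentration, mg/m3

sims.sort()
print(f"median = {sims[N // 2]:.1f} mg/m3, "
      f"95th percentile = {sims[int(0.95 * N)]:.1f} mg/m3")
```

Reporting the 95th percentile of the simulated concentrations is exactly the precautionary upper bound discussed above: tighten the input distributions with better data and that upper bound comes down.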

Given the level of uncertainty in our current set of OELs, my suggestion is that we quantitatively describe the uncertainty within these values as part of the documentation of any OEL.  Doing so will typically reveal a relatively large range of uncertainty, which some may find disconcerting; however, I believe that disclosing it is a critical part of the integrity of the process.  More about this critical issue of documenting OELs will come in a future blog as well.