
Monday, December 9, 2013

Risk Assessment Uncertainty or How Safe is Safe? Part 1: Exposure

In the last blog I discussed the client’s expectation that the risk assessments we do represent our professional certification of the relative safety of any scenario under consideration.  Of course, the thoughtful reader will then ask:  What is safe?
The above assumes that the risk assessment will end with a “happy face”; that is, that the report deems the scenario to be relatively safe.  The reality is that I have rarely written an assessment that did not.  Most clients do not want a determination of significant or unacceptable risk documented.  Typically, if the client has committed to doing a risk assessment, then they are committed either to refining the assessment (with additional testing and data) to the point of allowing a conclusion of safety (see previous blog) or to applying risk management options that choke down the exposure and reduce the risk to acceptable (or at least not unacceptable) levels.

Again we are at essentially the same question:  What is safe or at least not unacceptably risky?

One answer to that question is that a “safe” exposure is one that does not exceed the exposure limit.  For the purpose of this blog we will assume that the exposure limit is a “bright line” that defines a safe exposure and then look at it from the exposure end of things.  The factors that make up exposure are not constant; indeed, they are quite variable.  In fact, if you look at monitoring data for the same person doing the same job, the spread in values is quite large and is often described as a lognormal distribution with a geometric standard deviation (GSD) of 2 or greater.  A GSD of 2 means that the ratio of the 84th percentile to the 50th percentile of the distribution, and of the 50th percentile to the 16th percentile, is equal to 2.  Thus, the ratio of the 84th percentile to the 16th percentile is 4-fold.  That still leaves 32% of the exposures either less than half or more than twice the median exposure.  As a practical example, a measured distribution with a median exposure of 100 and a GSD of 2 will have 16% of its values below 50 and 16% above 200.  If the exposure limit is 200, then the exposure will exceed the exposure limit 16% of the time.
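For readers who want to check that arithmetic, here is a minimal sketch of the calculation (in Python with SciPy; the language and library are simply my choice for illustration).  It uses the same numbers as the example above: median 100, GSD 2, exposure limit 200.

```python
# Minimal sketch: fraction of a lognormal exposure distribution above the limit.
# Values match the example in the text (median 100, GSD 2, limit 200).
from math import log
from scipy.stats import norm

median = 100.0   # 50th percentile exposure
gsd = 2.0        # geometric standard deviation
oel = 200.0      # exposure limit

# For a lognormal distribution, ln(exposure) is normal with
# mean = ln(median) and standard deviation = ln(GSD).
z = (log(oel) - log(median)) / log(gsd)
exceedance = 1.0 - norm.cdf(z)

print(f"Fraction of exposures above the limit: {exceedance:.1%}")  # about 16%
```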

Considering such statistics, many in our profession consider an exposure “safe”, or at least in compliance, if it does not exceed the exposure limit more than 5% of the time.  Thus a median exposure of 100 with a GSD of 2 would not be considered “safe” given an exposure limit of 200.  The median measured exposure would have to be significantly lower than 100, assuming the GSD remains at 2.
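How much lower?  A rough sketch, under the same lognormal assumption and the 5% exceedance criterion, of the largest median that keeps the 95th percentile at or below the limit of 200 (again Python/SciPy, my choice of tool, not anything from the original post):

```python
# Rough sketch: largest median whose 95th percentile stays at or below the limit,
# assuming a lognormal distribution with GSD fixed at 2.
from scipy.stats import norm

gsd = 2.0
oel = 200.0
z95 = norm.ppf(0.95)            # about 1.645

# 95th percentile of a lognormal = median * GSD**z95, so solve for the median.
max_median = oel / gsd ** z95
print(f"Median must be at or below about {max_median:.0f}")  # roughly 64
```

In other words, with a GSD of 2 the median would have to come down to roughly one third of the exposure limit before the 5% criterion is met.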

The above is an ideal case, when we have a lot of data and can accurately estimate the actual distribution of exposures.
Consider what is most often the case: we take a few samples and, if they are below the exposure limit, some of us might declare the situation safe.  Given the above, it should be obvious that we should do some statistical analysis on the samples we take.  IH STAT was designed to do just that.  This important tool for evaluating our monitoring data is available at:
http://www.aiha.org/get-involved/VolunteerGroups/Pages/Exposure-Assessment-Strategies-Committee.aspx
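I am not reproducing IH STAT here, but to give a flavor of the kind of lognormal statistics it reports from a handful of samples, here is a small sketch (Python; the sample values and exposure limit are made up purely for illustration, and the calculation is a simple point estimate, not the full tool):

```python
# Illustrative only: simple lognormal point estimates from a few personal samples.
# The sample values and exposure limit below are hypothetical.
import numpy as np
from scipy.stats import norm

samples = np.array([45.0, 80.0, 60.0, 120.0, 95.0])   # hypothetical results
oel = 200.0

logs = np.log(samples)
gm = np.exp(logs.mean())                     # geometric mean
gsd = np.exp(logs.std(ddof=1))               # geometric standard deviation
p95 = gm * gsd ** norm.ppf(0.95)             # estimated 95th percentile
exceedance = 1.0 - norm.cdf((np.log(oel) - logs.mean()) / logs.std(ddof=1))

print(f"GM = {gm:.0f}, GSD = {gsd:.2f}")
print(f"Estimated 95th percentile = {p95:.0f}")
print(f"Estimated exceedance fraction = {exceedance:.1%}")
```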

I will cover this important tool in a future blog.   It will tell you how good your data really are at predicting exposure and risk.

If you want a very sobering experience, download the free app IH DIG (by Adam Geitgey) on your Android device (available at the Play Store) and see how good you are at predicting the actual exposure potential from a few measured values using the above criterion of "safe".   Like I said, it is a very sobering experience.

Modeling exposure has the same issue.  If you are honest about the variables you put into the models, you know that they are not single values but distributions as well.  That means that the model output of estimated exposure is also a distribution of exposures, which can be compared to an exposure limit.  Monte Carlo analysis is the best way to propagate the input distributions and obtain an output distribution of predicted exposures.  Not surprisingly, most output distributions appear to be shaped like lognormal curves.  I will go over a simple example in a future blog, but the point is that there will almost always be some level of predicted exposure in these distributions that is above the exposure limit.
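To make the idea concrete, here is a bare-bones Monte Carlo sketch.  The model (a simple steady-state box model, concentration = generation rate / ventilation rate) and every parameter value in it are hypothetical, chosen only to show how input distributions become an output distribution that can be compared to a limit:

```python
# Bare-bones Monte Carlo sketch: propagate input distributions through a simple
# (hypothetical) exposure model and look at the distribution of predicted exposures.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical inputs for a steady-state box model, C = G / Q.
generation = rng.lognormal(mean=np.log(50.0), sigma=np.log(1.5), size=n)   # mg/min
ventilation = rng.uniform(5.0, 20.0, size=n)                               # m3/min

concentration = generation / ventilation     # mg/m3, predicted exposure

oel = 10.0                                   # hypothetical exposure limit, mg/m3
exceedance = np.mean(concentration > oel)

print(f"Median predicted exposure: {np.median(concentration):.1f} mg/m3")
print(f"Fraction of predictions above the limit: {exceedance:.1%}")
```

Plot the resulting concentrations and, as noted above, the output will typically look roughly lognormal, with some tail of predictions above the limit.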

So "how safe is safe?”  It turns out that it is a question to be decided by the body politic as a subjective judgment.   I personally think the 5% level of exceedance mentioned above seems reasonable to me but that is just my opinion.   The point here is that there is almost always some level of predicted exceedance based on the inherent variability of reality.
I think it is important to let the client in on this game of uncertainty analysis, to show him/her that there is no such thing as absolute safety, only relative safety expressed in terms of uncertainty.

Just to really complicate matters, the above is just the exposure half.   Can we really think that there is no uncertainty in the toxicity benchmark or exposure limit half as well?   More about this in next week's blog.


2 comments:

  1. Nice post.

    I would add that for toxins with chronic effects, variability around the mean will be important for compliance but less so for health. On the other hand, with an acutely acting agent, variability around the mean would be important for both.

    1. David,

      Good point. There is a damping effect for toxins with a long half-life in the body such that daily variation is not as critical.
