
Monday, October 27, 2014

Confession: The math I should have learned in school

Let’s face it; a lot of us in the IH profession are at some level intimidated by math. I confess to the syndrome, and I believe that this fear is one of the reasons that modeling is still not generally done, or even attempted, by many. Poor teachers, poor classes, and poor motivation as a teenager are all possible reasons. I have experienced all of them, but significantly later in my professional life, well after the age of 30, I recognized the need. I saw that math was the real basis of physical science and a tool I had to have if I was going to understand it and work as a technologist. At that point, I decided to back-fill my education and was fortunate to find some very good teachers in night school.

I have pushed modeling in the IH profession for many years and have seen a steady increase in its use as dedicated professional colleagues in the AIHA have picked up the banner, written books and articles, and given excellent courses. I continue to believe, however, that only a relatively small percentage of the IH professionals who might benefit from this tool are actually engaged with it. My new hypothesis is that math intimidation is the reason.

The overarching purpose of this blog is to be “An educational blog designed to introduce and facilitate industrial hygienists' involvement in quantitative risk assessment - especially exposure assessment and the specific area of exposure modeling.” If foggy math concepts are keeping folks from engaging, then I want to help back-fill your education relative to some basic mathematical concepts that will provide you with very useful tools.

At the end of this blog I am going to cut and paste some stuff from last week’s blog in which I tried to explain two-dimensional acute inhalation toxicity (both concentration and time).    To provide a good explanation of the subject I needed to explain PROBITS.   If your understanding of probits is a bit foggy, I think this explanation below could help.   If it doesn't explain it clearly please let me know where I lost you.   You can reply anonymously and, believe me, if you have the issue others will as well.  Indeed, please send me math topics that remain foggy to you.  Send me math topics that you just quit thinking about because you thought they were too hard.

Other areas where I could try to provide some simple explanations of how the concepts work and why they are useful include:

  • Logarithms (done in a previous blog but it could be repeated and improved)
  • Calculus:  Area under the curve
  • Statistics:   correlation, curve fitting, standard deviation, linear regression
  • Mathematical definitions of inhalable, thoracic and respirable airborne concentrations and mass


PROBITS (and their use in acute toxicity – lethal dose-response example):

You may or may not remember what a “probit” is, but it is a very useful mathematical construct. It is directly related to the standard Gaussian bell-shaped curve, with the area-under-the-curve (AUC) describing the portion of a population included under any part of that curve. I know what some of you are thinking: YIKES! What is this guy talking about? All I ask is that you please stay with me on this for a few more paragraphs! Like a professor of mine once said, “If it’s foggy you’re learning something!” A few minutes of concentration can pay off in a lifetime of understanding.

Here is an illustration of a Gaussian or bell-shaped curve:

[Figure: the standard Gaussian (bell-shaped) curve, with the mean and standard deviations (sigma) marked]

The peak of the bell-shaped Gaussian curve is right at 50%. That is, half the folks are in the area-under-the-curve (AUC) below (to the left of) the peak and half are in the AUC above (to the right of) the peak. This is the average or mean value. It is also Probit = 5. Now I am going to ask you to remember a statistical construct you learned called the standard deviation. The AUC up to one standard deviation (or sigma in the illustration above) above the mean is approximately 84%; that is, 84% of the population is in the AUC to the left of one standard deviation above the mean. One standard deviation above the mean is also Probit = 6, so 1 sigma above the mean = Probit = 6. Because the curve is symmetrical, Probit = 4 is one standard deviation below the mean, and only 16% of the population is in the AUC to the left of this value.

Still foggy? Let’s put this in terms we all might understand: SAT scores. The average or mean SAT test score is a Probit multiplied by 100; that is, Probit 5 x 100 = 500. Half the folks taking the test got a higher SAT score than 500 and half got a lower one. If you got an SAT of 600 you did better than 84% of the folks taking the test. If you got a 733 on the SAT you did better than 99% of the folks who took that test. The computer stops at 800 because you are getting so close to 100% that it does not matter anymore. How did I figure that a 733 SAT score beat 99% of the folks tested? It is a simple function in Excel: just put =NORMSINV(0.99) + 5 into a cell and you will get 7.33; multiply by 100 to get the SAT score.
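For anyone who wants to check this outside of Excel, here is a minimal Python sketch (it assumes the scipy package, and the function names are just mine for illustration) that does the same arithmetic as NORMSINV:

# Minimal sketch of the probit/percentile conversion described above.
# norm.ppf() is the inverse cumulative normal distribution, the same
# quantity Excel's NORMSINV() returns.

from scipy.stats import norm

def percentile_to_probit(fraction):
    """Convert a population fraction (e.g., 0.99) to a probit (mean = 5)."""
    return norm.ppf(fraction) + 5

def probit_to_sat(probit):
    """Illustrative SAT-style score: the probit times 100."""
    return probit * 100

if __name__ == "__main__":
    p = percentile_to_probit(0.99)                 # about 7.33
    print(round(p, 2), round(probit_to_sat(p)))    # prints the probit and the 733-style score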

We all probably remember living and dying with “the curve” in college. The raw test scores were converted to probits and, depending on the teacher, grades were assigned such that there were something like 10% “A”s, 25% “B”s, 55% “C”s and perhaps 10% “D”s or lower. This is how you could get 40 out of 100 correct on a physics test and still get a “B”! This is also why we all hated the person or persons who “killed the curve” by scoring very high and dragging the mean upward and everyone else's grade downward.

So let’s shift our thinking over to toxic or lethal dose-response, which also follows this curve. Probit = 7.33 (5 + 2.33) means that 99% of the population will respond with the toxic effect being measured, in this case death. Since the curve is symmetrical, Probit = 2.67 (or 5 – 2.33) means that 1% of the population will be predicted to die and 99% predicted to live. You can never get there, but you can get as close to 0% deaths as you like with smaller and smaller Probit values.

Please let me know what you want to see and whether I am just talking to myself. 

Do you think I have it right about math intimidation in the IH ranks or do you think that I am all wet on this?






Monday, October 20, 2014

Modeling Acute Toxicity Dose-Response

By definition, things happen quickly during acute toxic responses to chemical exposure. Indeed, adverse health effects from inhaling an acute toxicant can happen in a time frame of seconds to tens of minutes. Haber’s Rule advises that the toxic effect is a function of the combined product of concentration and time. The classic expression of Haber’s Rule is:

  Toxic Response = f ((Concentration)(time)).   

This mathematical expression says that when the breathing zone concentration is twice as high (e.g., 200 ppm versus 100 ppm), the time of exposure need only be half (50%) as long to get the same toxic response.

The majority of acute toxicity dose-response modeling that has been done to date deals with lethality from inhalation.  Here acute lethality inhalation testing is done with rats in a time frame of tens of minutes extending to hours in duration.  These data are then modeled with the intention that these models will be useful for predicting human risk.

Relative to acute inhalation toxicity, Haber’s rule needs some adjustment. The modified relationship that fits reality better is:
  Lethality = f((Concentration)^n (time)),   n > 1 (but typically < 4)

In this relationship, the inhaled concentration has a non-linear effect on lethality.  For example, at n = 2, when the concentration is twice as high the time of exposure only needs to be 25% as long to render the same response.
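To make that arithmetic concrete, here is a minimal Python sketch of the constant toxic load idea (the function name and the numbers are hypothetical, chosen only to illustrate the C^n x t trade-off):

# Minimal sketch of the C^n * t trade-off described above.
# If the toxic load C**n * t is held constant, doubling the concentration
# shortens the equivalent exposure time by a factor of 2**n.

def equivalent_time(c_ref, t_ref, c_new, n):
    """Exposure time at c_new giving the same toxic load as (c_ref, t_ref)."""
    return t_ref * (c_ref / c_new) ** n

if __name__ == "__main__":
    # Classic Haber behavior (n = 1): doubling C halves the required time.
    print(equivalent_time(100, 60, 200, n=1))   # 30.0 minutes
    # Modified rule with n = 2: doubling C cuts the time to 25%.
    print(equivalent_time(100, 60, 200, n=2))   # 15.0 minutes

With n = 1 you recover classic Haber behavior; with n = 2 the equivalent exposure time drops to one quarter, as described above.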

Some of the earliest work on this was done in the Netherlands and, I believe, many of the modeling tools were developed by Dr. Wil tenBerge. The current standard equation used to fit animal data is:
  Probit = a + b ln(C^n t)

a, b and n are fitted coefficients, C is expressed either as mg/m3 or ppmv, and t is the exposure time.

You may or may not remember what a “probit” is but it is a very useful mathematical construct.    It is directly related to the standard Gaussian bell-shaped curve with the area-under-the-curve (AUC) describing the portion of a population included on any part of that curve.  I know what some of you are thinking but please stay with me on this!  Like a professor of mine once said, “If it’s foggy you’re definitely learning something!”    

The peak of the bell-shaped Gaussian curve is right at 50%. That is, half the folks are in the area-under-the-curve (AUC) below (to the left of) the peak and half are in the AUC above (to the right of) the peak. This is the average or mean value. It is also Probit = 5. Now I am going to ask you to remember a statistical construct you learned called the standard deviation. The AUC up to one standard deviation above the mean on the Gaussian curve is approximately 84%; that is, 84% of the population is in the AUC to the left of one standard deviation above the mean. One standard deviation above the mean is also Probit = 6. Because the curve is symmetrical, Probit = 4 is one standard deviation below the mean, and only 16% of the population is in the AUC to the left of this value.

Still foggy? Let’s put this in terms we all understand: SAT scores. The average or mean SAT test score is a Probit multiplied by 100; that is, Probit 5 x 100 = 500. Half the folks taking the test got a higher SAT score than 500 and half got a lower one. If you got an SAT of 600 you did better than 84% of the folks taking the test. If you got a 733 on the SAT you did better than 99% of the folks who took that test. The computer stops at 800 because you are getting so close to 100% that it does not matter anymore. How did I figure that a 733 SAT score beat 99% of the folks tested? Answer: it is a simple function in Excel; just put =NORMSINV(0.99) + 5 into a cell and you will get 7.33, then multiply by 100 to get the SAT score.

We all probably remember living and dying with “the curve” in college. The raw test scores were converted to probits and, depending on the teacher, grades were assigned such that there were something like 10% “A”s, 20% “B”s, 50% “C”s and perhaps 10% “D”s or lower. This is how you could get 40 out of 100 correct on a physics test and still get a “B”! This is also why we all hated the person who “killed the curve” by scoring very high and dragging the mean upward and everyone else's grade downward.

So let’s shift our thinking back to dose-response. Probit = 7.33 (5 + 2.33) means that 99% of the population will respond with the toxic effect being measured, in this case death. Since the curve is symmetrical, Probit = 2.67 (or 5 – 2.33) means that 1% of the exposed population will be predicted to die. You can never get there, but you can get as close to 0% as you like with smaller and smaller probit values. So let’s reproduce the above equation here:
Probit = a + b ln(C^n t)

Given any value of breathing zone concentration (C) over any time interval (t) and the fitted values for a, b and n (from animal studies), we get a predicted percentage response expressed as a probit. At Probit = 9 essentially everyone is predicted to respond. At Probit = 1 essentially no one is predicted to respond or be adversely affected by this concentration over this time interval. Using another function in Excel (NORMSDIST()) we can easily convert the probits to percentages predicted to respond.
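To see the whole chain in one place, here is a minimal Python sketch, assuming purely hypothetical coefficient values for a, b and n (they are not fitted values for any real chemical); it computes the probit from C and t and then converts it to a predicted fraction responding, the same conversion NORMSDIST() performs in Excel:

# Minimal sketch of the probit dose-response chain described above.
# The coefficients a, b and n below are placeholders for illustration only.

import math
from scipy.stats import norm

def probit(c, t, a, b, n):
    """Probit = a + b * ln(C**n * t)."""
    return a + b * math.log(c ** n * t)

def fraction_responding(probit_value):
    """Convert a probit (mean = 5) to the predicted fraction responding,
    as Excel's NORMSDIST() does for the corresponding z-score."""
    return norm.cdf(probit_value - 5)

if __name__ == "__main__":
    a, b, n = -10.0, 1.0, 2.0     # hypothetical coefficients
    c, t = 1000.0, 30.0           # e.g., ppm and minutes
    p = probit(c, t, a, b, n)
    print(round(p, 2), round(fraction_responding(p), 3))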

A previous blog here discussed bolus exposures to acute toxicants, that is, exposures that occur in a time frame of seconds to minutes. In most cases we are not dealing with lethality, but we could be encountering serious respiratory irritation from these short-term exposures. If we had good toxicological data on these responses at various C,t points we could model the percentage of the exposed population predicted to have a respiratory irritation response.

As I mentioned in a previous blog, Dr. Wil tenBerge is an incredibly generous colleague who shares all of his models and software on his web site.  Just put in “home page Wil tenBerge” into Google.   http://home.wxs.nl/~wtberge/   He also has a considerable amount of very good educational material explaining this further as well as quite a few data sets of C,t rat lethality for chemicals like ammonia.

Questions for Discussion:

In a previous blog I discussed that potent chronic carcinogens like nitrosamines can have serious acute inhalation irritation potential. Do you have “chronic” toxicants in your workplace that you are controlling to 8-hour OELs that might also be acute irritants? How would you evaluate and control this acute risk?

Is anyone out there aware of some available in-vivo or in-vitro toxicological testing protocols that could evaluate acute C,t irritation potential?   If so please share.




Monday, October 13, 2014

USE is KING in Ranking Human Exposure Potential to Chemicals but No One is Listening

Mandates that require the estimation of exposure and human health risk posed by large numbers of chemicals present regulatory managers with a significant challenge. Although these issues have been around for a long time, the estimation of human exposure to chemicals from the use of products in the workplace and by consumers has generally been hindered by the lack of good tools. Indeed, one would think that the logical and most cost-effective approach would include an initial attempt to rank-order or prioritize the chemicals according to the human exposure potential that each might pose.
This is not so easily done. Indeed, chemicals used in general commerce, that is, chemicals used in the workplace and in commercial and consumer products, have always presented a number of challenges for regulators. First, there are a large number of chemicals in use; for example, the European Inventory of Existing Commercial Chemical Substances lists about 100,000 chemicals, whereas the current US inventory of existing chemicals under the Toxic Substances Control Act (TSCA) is approximately 70,000. Second, there is a wide range of chemicals and products used in the workplace and by consumers. Third, one product may result in a number of different exposures to different individuals by different routes over the product’s life cycle. Finally, the nature of the chemical exposures that result from the use of these products is highly variable because of differing use patterns.
In general, formal efforts to “rank” or prioritize chemical sources have been based primarily on surrogates of exposure (e.g., annual production volume or physical and chemical characteristics). Clearly, these approaches can at best provide only qualitative or quasi-quantitative results of questionable effectiveness.
Consider a chemical with a nasty physical/chemical profile (an organic with high volatility and a high octanol-water partition coefficient) that is manufactured in very large quantities but mostly in enclosed manufacturing facilities, and that mostly gets reacted away before it ever gets the chance to expose either workers or consumers. Compare that to a product with very limited production volume and what might be considered “mild” physical/chemical properties from an exposure standpoint BUT that is used in intimate contact with millions of people. The above ranking tools could dramatically misclassify the exposure and potential risk of these two hypothetical chemicals.
A number of years ago, my colleagues in The LifeLine Group and I worked on this problem for Health Canada. We ultimately developed and published an approach we call CEPST. This particular prototype tool was developed for “nearfield” sources of exposure. We explicitly separated these “nearfield” sources from those in the “farfield.” Farfield sources are defined as relatively large but typically distant sources or emissions to the general environment (air, water, soil). Think of smoke stacks and large discharges to water. Nearfield sources are those that occur within the microenvironment of a residence or literally at arm’s length from the exposed person. Think of hair spray. The nearfield has been shown to be the dominant milieu of human exposure for many, if not the vast majority of, chemicals, especially those that are neither persistent nor bioaccumulative nor discharged at relatively high levels to the general environment. Thus, a critical challenge for any regulatory approach designed to really understand human (especially consumer) exposure to chemicals lies first in the recognition and elucidation of both the farfield and the nearfield exposures.

CEPST in its current form focuses on the nearfield but it could easily be expanded to include exposures from the farfield.

CEPST functions on the input received during four activities conducted for each chemical under consideration:

  1. Chemical identification and physical properties (Chemical Abstract Service (CAS) number, physical properties from available sources)
  2. Internet search for chemical use
  3. Expert panel deliberation and determination of sentinel product(s) for that chemical
  4. Modeler assignment of chemical/scenario dependent variables
 

The important thing to understand here is that CEPST focuses on the chemicals’ USE in products as its primary aim. Indeed, we continue to believe that this is the most direct and accurate manner of understanding and ranking the human exposure potential of a large number of chemicals. Unfortunately, it has not been embraced for further development by any large regulatory entity, and I personally think that failure has been a lost opportunity.

I would be happy to send you a pdf copy of the CEPST paper if you email me at mjayjock@gmail.com. Also, I would be most interested in your responses to the following questions:


Do you have to rank-order chemical exposures in your professional life, and if so, what tool or approach do you use?

Even if you do not have to do it in your current job, how would you approach the exposure/risk ranking of hundreds or thousands of chemicals used in commerce?

Monday, October 6, 2014

Toward Better Toxicological Benchmarks, that is, Better OELs


Sometimes we give interviews to folks in the press and the message that gets printed is different from the message we provided. That did not happen with the subject article. Not only was my message not distorted, it was expertly woven into a coherent story drawing on a broad base of other input. It was written by Ed Rutkowski (Editor in Chief of the AIHA Synergist), who is obviously a good thinker and a brilliant writer. He interviewed me and a number of colleagues on the general subject of occupational exposure limits (OELs). Even though the subject is filled with technical and political nuance, Ed did a masterful job of putting it all together into a very coherent picture.

I am reproducing below the three sidebar quotes extracted from the piece to whet your appetite and encourage you to download it.

“People can actually look at [RBOELs] and get some idea of the quantitative risk and, perhaps more importantly, the uncertainty around that prediction.”  

“One of the fundamental challenges for the industrial hygiene profession moving forward is to help practitioners answer the question, ‘Which OEL should I use?’”

“Understanding and communicating the uncertainty around those predicted risks and trying to be more open scientifically about what we know and don’t know, would be big steps forward.”

The article is free to all online.  All you have to do is put the following into a Google search in either Chrome or Internet Explorer:

“synergist supplement september 2014 benchmarks 22-23” 

In my experience, you will not find a better explanation of the current state of OELs.

Dr. Jimmy Perkins and I have been working on an initiative to get the dialog on OEL development moving forward. We call it (after a name provided by Frank Hearl) Risk@OEL. Other work in my one-man shop on worthy projects caused me to temporarily drop the ball on this over the summer, but I hope to pick it up again later this year or early next year. It will definitely be the topic of future blogs. In the meantime, I would love to hear back from you, before or after you read the article, on the following:


How do you see the current state of the science (or state of affairs) related to the OELs we use in the IH community in our efforts to gauge the risk of the actual exposures we measure or estimate?