Monday, December 30, 2013

The Eddy Diffusion Near Field Model is Now Useable

I am going back to a “nuts and bolts” piece on inhalation exposure modeling this week.   The subject is a near field model that has been around for many years but was not very useful until recently; new work promises to change that.

The model is the Eddy Diffusivity Model.   The basic model is presented below:

It may not look like it, but the math is pretty straightforward, especially if you let IH MOD do the work (see the previous blog MODELING MATH MADE EASY OR AT LEAST EASIER to learn about the IH MOD Excel spreadsheet models and documentation).    Conceptually, the model is pretty simple.   If you have a small source, its vapors will radiate out as a sphere if it is suspended in air.  They will radiate as a hemisphere if it is on a flat surface, as a quarter-sphere if it is on a floor surface along a wall, and as a 1/8 sphere if it is in a corner.  The above equation is for a sphere; the 4 becomes a 2 for a hemisphere, a 1 for a quarter-sphere and 0.5 for a 1/8 sphere.  What is cool about it is that the concentration is a continuously decreasing gradient as you move away from the point source.   That is, as the distance from the source (r) increases, C decreases.   It does NOT need or use the well-mixed assumption of the 1 zone or 2 zone models.
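Since the equation itself appears as an image in the original post, here is a sketch of the steady-state form as I understand it, C = G/(n·π·D·r), with the geometry factor n as described above (the function and parameter names are mine, and the numbers in the example are illustrative only):

```python
import math

def eddy_diffusion_conc(G, D, r, n=4):
    """Steady-state eddy diffusion estimate of concentration (mg/m^3)
    at distance r (m) from a continuous point source.

    G : emission rate (mg/min)
    D : eddy diffusion coefficient (m^2/min)
    r : distance from the source (m)
    n : geometry factor -- 4 for a full sphere (source suspended in air),
        2 for a hemisphere (source on a flat surface), 1 for a
        quarter-sphere (along a wall), 0.5 for a 1/8 sphere (in a corner)
    """
    return G / (n * math.pi * D * r)

# Concentration falls off continuously with distance -- no well-mixed assumption.
near = eddy_diffusion_conc(G=100.0, D=0.1, r=0.3, n=2)  # 30 cm from a bench-top source
far = eddy_diffusion_conc(G=100.0, D=0.1, r=1.0, n=2)   # 1 m away
```

Note how halving the geometry factor (sphere to hemisphere) doubles the predicted concentration at any given distance, exactly as the 4 → 2 substitution in the equation implies.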

Seems like it would be the ideal model for such sources, but there was one major problem.   All the parameters in the model are relatively easy or straightforward to estimate or measure except D.  Indeed, the predictions of this model are highly dependent on D as defined above.   D depends on how the air moves about randomly in the indoor environment, and it has historically proven very difficult to measure.   As a result we have had to use a very wide range of estimates for D, and as such the utility of this model was quite limited.

Enter some sharp researchers from Stanford University and their work on estimating D from parameters of the room that are much easier to measure; namely, the ventilation rate expressed as air changes per hour (ACH) and the dimensions of the room.  They published their work in the journal Environmental Science & Technology (ES&T), which has a very good reputation.    This part of that paper boils down to the following simple regression relationship:

D = L^2 (0.60 (ACH) + 0.25) / 60    (units: m^2/min)
                L = cube root of the room volume (m)
               ACH = mixing air changes per hour in the room volume (hr^-1)
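For anyone who would rather not wait for the spreadsheet, the regression is easy to code; this is a minimal sketch (the function name and range check are mine):

```python
def eddy_diffusivity(room_volume_m3, ach):
    """Estimate the eddy diffusion coefficient D (m^2/min) from the
    Cheng et al. (2011) regression.  Stated applicable range:
    ACH of roughly 0.1 to 2.0 per hour."""
    if not (0.1 <= ach <= 2.0):
        raise ValueError("regression was fitted for ACH between 0.1 and 2.0")
    L = room_volume_m3 ** (1.0 / 3.0)  # cube root of room volume, m
    return L ** 2 * (0.60 * ach + 0.25) / 60.0

# e.g., a 30 m^3 room with 0.5 mixing air changes per hour
D = eddy_diffusivity(30.0, 0.5)  # about 0.089 m^2/min
```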

The R^2 regression fit for this sub-model is 0.70, which means that 70% of the variation in D is explained or predicted by the room volume and ventilation rate and about 30% is unexplained or random noise.  In my experience, given the other uncertainties involved, this is pretty good.  This algorithm is applicable over an ACH range of 0.1 to 2.0.    Dr. Kai-Chung Cheng was first author on this paper, and it is my understanding that he is pursuing additional work to sharpen up this relationship and extend its applicability.   Dr. Rachael Jones (University of Illinois at Chicago, School of Public Health) is also a brilliant modeler and a very active researcher in this area.  I understand that she is also planning research to deepen our quantitative understanding of these relationships.   In the meantime I have put the above algorithm for estimating D into a spreadsheet, which I will happily send to whoever asks me for it.

I plan to use it whenever I use the 2 box model (see previous blog: THE MOST VERSATILE AND WELL-TESTED INHALATION MODEL) to compare the results and try to learn something about what these different models are telling us.

The reference for the Stanford paper is:

Kai-Chung Cheng, Viviana Acevedo-Bolton, Ruo-Ting Jiang, Neil E. Klepeis, Wayne R. Ott, Oliver B. Fringer, and Lynn M. Hildemann: Modeling Exposure Close to Air Pollution Sources in Naturally Ventilated Residences: Association of Turbulent Diffusion Coefficient with Air Change Rate. Environ. Sci. Technol. 2011, 45, 4016–4022

I am quite sure that Dr. Cheng will be happy to send you a pdf copy if you write to him.


Monday, December 23, 2013

Dimensional Analysis is an Important Modeling Tool

Modeling is about units and keeping the units straight is critical.   Dimensional analysis assures that we are comparing “apples to apples” and that we are in the correct ball park with our answers.  Most of you probably already know this but some of you perhaps do not.

I once reported an answer that should have been in units of milligrams (mg) as micrograms (µg) and thus released a report with a 1000x error!   That mistake provided two lessons.   First, ALWAYS have some peer review by a trusted colleague, and second, take your time and do a thorough dimensional analysis of your math.

Most exposure models represent a string of algebraic calculations.   Sometimes the string can get pretty long and complicated, with all the factors that go into making the model prediction on one side of the equals sign and the answer (either final or intermediate) on the other.  If you break it down using dimensional analysis, it becomes much easier to handle.

Let’s do an example with a relatively simple equilibrium concentration model with constant source rate and ventilation rate:   C = G/Q.     Note:  we want our answer in mg/m3.

The scenario is an open drum evaporating into a room at room temperature.

For Q we are told that it is a 30 m3 room with 0.5 mixing air changes of fresh air ventilation per hour.
So  Q = (room volume)(air changes per hour), with units of (m3)(1/hr) or m3/hr.   Answer:  15 m3/hr

Simple enough; indeed, I think we have all done this.  However, look closely at what we really did: we multiplied a variable with units of m3 by a variable with units of 1/hr (or hr-1) to get a quantity with units of m3/hr.   That is really all there is to dimensional analysis.

Let’s say we measured the evaporative loss of liquid from the drum over time as 2 grams in 400 minutes.   That is 2/400 or 0.005 grams/min; however, we are looking for units of mg/hr.     So:

G = (0.005 grams/min)(60 min/hr)(1000 mg/gram) =   300 mg/hr   (we cancel out the grams and the minutes and are left with mg/hr)

As such, we have C = G (300 mg/hr) divided by Q (15 m3/hr).   For clarity I am showing just the units or dimensions below:
                 (mg/hr)/(m3/hr) and the hrs cancel out leaving mg/m3

In the above equation we are left with 20 mg/m3 as an estimated equilibrium airborne concentration in this room.  
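The whole chain above can be sketched in a few lines, with the unit bookkeeping carried in comments so each cancellation is visible:

```python
# Worked example of C = G/Q with explicit unit bookkeeping.
room_volume_m3 = 30.0  # m^3
ach_per_hr = 0.5       # 1/hr

Q = room_volume_m3 * ach_per_hr  # (m^3)(1/hr) -> m^3/hr          = 15 m^3/hr

grams_lost = 2.0   # g evaporated
minutes = 400.0    # over this many minutes
G = (grams_lost / minutes) * 60.0 * 1000.0
# (g/min)(min/hr)(mg/g) -> grams and minutes cancel -> mg/hr      = 300 mg/hr

C = G / Q  # (mg/hr)/(m^3/hr) -> hours cancel -> mg/m^3           = 20 mg/m^3
```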

If we knew the molecular weight of the compound, we could calculate the concentration as volume parts per million (ppm v/v) in air using the molar volume (the gaseous volume of 1 mole of any gas or vapor), which is 24 liters (L) at 25C.   For a vapor at 1 mg/m3 with a MW of 100 g/mole we can determine the linear conversion factor:

(1 g/1000 mg)(24 L/mole)(1 mole/100 g)(0.001 m3/L)(1 mg/m3) = 2.4 x 10^-7 (unitless conversion factor)

Thus, for every 1 mg/m3 of a gas (with MW 100 at 25C) there are 2.4 x 10^-7 volume parts of gas per volume part of atmosphere.  Multiply this part per part by one million (10^6) and you have 0.24, the factor for converting mg/m3 to ppm v/v; the reciprocal, 1/0.24 = 4.2, converts ppm v/v back to mg/m3 for this gas at this temperature.  The dimensional analysis for this is below:

For every 1 mg/m3 there are (2.4 x 10^-7 parts/part)(1,000,000 parts per million parts by volume) = 0.24 ppm v/v, and the reciprocal 1/0.24 or 4.2 mg/m3 for every ppm v/v of this vapor.  For our example it would be 20 mg/m3 times 0.24 or 4.8 ppm v/v, assuming its MW was 100 and it was at 25C.
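A small sketch of this conversion as a pair of functions (the names are mine; the 24 L/mole molar volume is the 25 C value used above):

```python
MOLAR_VOLUME_L = 24.0  # L/mole for any gas at 25 C and 1 atm (approximately)

def mg_m3_to_ppm(c_mg_m3, mw_g_mol, molar_volume_L=MOLAR_VOLUME_L):
    """Convert mg/m^3 to ppm (v/v): ppm = (mg/m^3) * molar volume / MW."""
    return c_mg_m3 * molar_volume_L / mw_g_mol

def ppm_to_mg_m3(c_ppm, mw_g_mol, molar_volume_L=MOLAR_VOLUME_L):
    """Convert ppm (v/v) back to mg/m^3: mg/m^3 = ppm * MW / molar volume."""
    return c_ppm * mw_g_mol / molar_volume_L

# the example above: 20 mg/m^3 of a MW-100 vapor at 25 C
ppm = mg_m3_to_ppm(20.0, 100.0)  # -> 4.8 ppm v/v
```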

If this explanation of dimensional analysis is a little fuzzy, I found one on YouTube that is clearer and only 9 minutes long:

Believe me, dimensional analysis is your friend and will help to keep you sane in doing problems associated with modeling.

Monday, December 16, 2013

Risk Assessment Uncertainty or How Safe is Safe? Part 2: Exposure Limits

In the last blog I discussed the inherent uncertainty around measured or model estimated exposure.  This week it is time to talk about the uncertainty in any exposure limit.

We have all seen changes in the exposure limits we have used over time.  The changes are almost invariably downward toward lower limits.   Does this mean that the chemical became more toxic?   Of course not; it just means that the uncertainty inherent in that particular exposure limit was not handled very well.  To guard against these surprises, I believe that uncertainty should be explicitly addressed during the documentation process.

The current definition of the risk present at the exposure limits that most of us use is that exposures controlled to these limits will protect “nearly all”.    Although the intent is clearly to protect the vast majority of folks exposed at the limit, there is currently no attempt to quantify what is meant by “nearly all”.   For a long time I have thought that the level of risk present at any exposure limit worthy of documentation should be quantified to the extent possible and, more important, that the uncertainty around that estimated quantitative level of risk should also be provided.

In truth, the risk of an adverse health effect occurring is a distribution of values which is low at low exposure levels and high at high exposures.   The exposure limit is but one value on that distribution.  We (Jerry Lynch, Phil Lewis and I) wrote a paper in 2001 about how one could estimate the risk at any exposure limit and how the uncertainty might be estimated.  I would be happy to send a copy of that paper to anyone who asks.    A more definitive scientific treatment of this subject was put forth in 2009 by the National Academy of Sciences in Science and Decisions: Advancing Risk Assessment, also known as the “Silver Book”.   The hard copy of the book will set you back about $55, but the NAS offers it for FREE as a PDF download!
The meat of this subject is in Chapter 5.
So in the final analysis, risk is a combination of an uncertain (a distribution of) exposure and an uncertain (a distribution of) hazard (or toxicological response).  Combining both distributions produces an output distribution of risk at any particular nominal or median exposure.   If the following conditions are met, then the risk will be shown to be relatively low or “safe”:
·         The exposure limit is relatively high versus the median estimated exposure.
·         The distributions for exposure and exposure limit are relatively narrow, such that they do not have a lot of overlap.

Please note there will still be some finite level of predicted risk – it will never be zero.

When the exposure goes up relative to the exposure limit, and/or the distributions for exposure or exposure limit are relatively wide, the predicted potential risk goes up as well.
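One way to see this numerically is a crude Monte Carlo sketch that treats both the exposure and the “true” limit as lognormal distributions and counts how often they overlap.  All of the numbers below are illustrative assumptions, not data:

```python
import math
import random

def fraction_over_limit(exposure_median, exposure_gsd,
                        limit_median, limit_gsd, n=100_000, seed=1):
    """Monte Carlo sketch: sample a lognormal exposure and a lognormal
    'true' exposure limit, and count how often exposure exceeds limit."""
    rng = random.Random(seed)
    mu_e, sig_e = math.log(exposure_median), math.log(exposure_gsd)
    mu_l, sig_l = math.log(limit_median), math.log(limit_gsd)
    hits = sum(rng.lognormvariate(mu_e, sig_e) > rng.lognormvariate(mu_l, sig_l)
               for _ in range(n))
    return hits / n

# narrow, well-separated distributions -> low (but never zero) predicted risk
low_risk = fraction_over_limit(10.0, 1.5, 100.0, 1.5)
# exposure closer to the limit and wider spreads -> predicted risk goes up
high_risk = fraction_over_limit(50.0, 2.5, 100.0, 2.5)
```

Note that even in the well-separated case the simulated risk is small but finite, which is exactly the "it will never be zero" point above.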

I believe that this is how we might start to get our arms around “How safe is safe?” 

Describing uncertainty in this or a similar manner will keep us from being surprised as we have been in the past.  It is also important to understand that much (perhaps most) of the uncertainty in the estimated hazard (exposure limit) results from our lack of knowledge of the actual mechanisms of toxicology.   Some modeled exposure estimates are also fraught with this uncertainty born of a lack of knowledge.   Thus, this type of analysis will also show us where we need to sharpen our tools to narrow either the exposure limit or exposure distributions and allow much more confident estimates of risk for our clients.

Monday, December 9, 2013

Risk Assessment Uncertainty or How Safe is Safe? Part 1: Exposure

In the last blog I discussed the client’s expectation that the risk assessments we do represent our professional certification of the relative safety of any scenario under consideration.   Of course, the thoughtful reader will then question:  What is safe?

The above assumes that the risk assessment will end with a “happy face”.   That is, that the scenario is deemed in the report to be relatively safe.   The reality is that I have rarely written an assessment that was not so.   Most clients do not want a determination of significant or unacceptable risk documented.   Typically, if the client has committed to doing a risk assessment, then they are committed either to refining the assessment (with additional testing and data) to the point of allowing a conclusion of safety (see previous blog) or to applying risk management options that choke down the exposure and reduce the risk to acceptable (or at least not unacceptable) levels.

Again we are at essentially the same question:  What is safe or at least not unacceptably risky?

One answer to that question is that a “safe” exposure is an exposure that does not exceed the exposure limit.   For the purpose of this blog we will assume that the exposure limit is a “bright line” that defines a safe exposure and then look at it from the exposure end of things.    The factors that make up exposure are not constant; indeed, they are quite variable.  In fact, if you look at monitoring data for the same person doing the same job, the spread in values is quite large and is often described as a lognormal distribution with a geometric standard deviation (GSD) of 2 or greater.   A GSD of 2 means that the ratio of the 84th percentile to the 50th percentile of this distribution, and of the 50th %tile to the 16th %tile, is equal to 2.     Thus, the 84th %tile/16th %tile ratio is 4-fold.  That still leaves 32% of the exposures either less than half or greater than twice the median exposure.   As a practical example, a measured distribution with a median exposure of 100 and a GSD of 2 will have 16% of its values below 50 and 16% above 200.     If the exposure limit is 200, then the exposure will exceed the limit 16% of the time.
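For a lognormal distribution, this exceedance fraction can be computed directly from the median and GSD; here is a quick sketch (function name mine) that reproduces the roughly 16% figure from the example above:

```python
import math
from statistics import NormalDist

def fraction_exceeding(limit, median, gsd):
    """Fraction of a lognormal exposure distribution above a limit.

    Exceedance = 1 - Phi( ln(limit/median) / ln(GSD) ),
    where Phi is the standard normal CDF.
    """
    z = math.log(limit / median) / math.log(gsd)
    return 1.0 - NormalDist().cdf(z)

f = fraction_exceeding(limit=200.0, median=100.0, gsd=2.0)  # about 0.16
```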

Considering such statistics, many in our profession consider an exposure “safe”, or at least in compliance, if it does not exceed the exposure limit more than 5% of the time.   Thus a median exposure of 100 with a GSD of 2 would not be considered “safe” given an exposure limit of 200.   The median measured exposure would have to be significantly lower than 100, assuming the GSD remains at 2.

The above is an ideal case, where we have a lot of data and can accurately estimate the actual distribution of exposures.

Consider what is most often the case: we take a few samples, and if they are below the exposure limit some of us might declare the situation safe.    For the above example, it should be obvious that we should do some statistical analysis on the samples we take.  IH STAT was designed to do just that.  This important tool for evaluating our monitored data is available at:

I will cover this important tool in a future blog.   It will tell you how good your data really are at predicting exposure and risk.

If you want a very sobering experience, download the free app IH DIG (by Adam Geitgey) on your Android device (available in the Play Store) and see how good you are at predicting the actual exposure potential, using the above criterion of "safe", from a few measured values.   Like I said, it is a very sobering experience.

Modeling exposure has the same issue.  If you are honest about the variables you put into the models, you know that they are not single values but distributions as well.   That means that the model output is also a distribution of estimated exposures, which can be compared to an exposure limit.  Monte Carlo analysis is the best way to propagate the input distributions and obtain an output distribution of predicted exposures.  Not surprisingly, most output distributions appear to be shaped like lognormal curves.  I will go over a simple example in a future blog, but the point is that there will almost always be some level of predicted exposure in these distributions that is above the exposure limit.
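As a sketch of what such a Monte Carlo looks like, here the source rate G and ventilation rate Q of a simple C = G/Q model are treated as lognormal inputs.  The medians, GSDs and the limit below are purely illustrative assumptions:

```python
import math
import random

def simulate_conc(n=50_000, seed=7):
    """Monte Carlo sketch: propagate lognormal G and Q through C = G/Q.
    The medians and GSDs below are illustrative assumptions, not data."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        G = rng.lognormvariate(math.log(300.0), math.log(1.5))  # mg/hr
        Q = rng.lognormvariate(math.log(15.0), math.log(1.3))   # m^3/hr
        out.append(G / Q)                                       # mg/m^3
    return out

conc = simulate_conc()
# fraction of predicted exposures above a hypothetical 40 mg/m^3 limit
exceed = sum(c > 40.0 for c in conc) / len(conc)
```

Even though the median of this output is well below the hypothetical limit, some predicted exposures still exceed it, which is the point made above.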

So "how safe is safe?"  It turns out to be a question decided by the body politic as a subjective judgment.   I personally think the 5% level of exceedance mentioned above is reasonable, but that is just my opinion.   The point here is that there is almost always some level of predicted exceedance based on the inherent variability of reality.

I think it is important to let the client in on this game of uncertainty analysis, to show him/her that there is no such thing as absolute safety, only relative safety expressed in terms of uncertainty.

Just to really complicate matters, the above is just the exposure half.   Can we really think that there is no uncertainty in the toxicity benchmark or exposure limit half as well?   More about this in next week's blog.

Monday, December 2, 2013

Balancing the Risk Assessment Client’s Needs with Yours

I used to work at the now defunct Rohm and Haas Company.  For many years I did risk assessment for the businesses.  I mentioned to a colleague once that I was having trouble figuring out who the client was on a particular project I was working on.   He seemed perplexed and asked me what I meant by the term “client”.    I told him that clients are the folks who get and use our analyses.  Doing risk assessment in a corporate setting, they may not be (and often are not) the ones we report to or those who determine our rank and salary, but they are critical nonetheless.   The extent to which we do a good job for them is the extent to which we remain gainfully employed.

It should be noted that in our business we have both clients and charges.   Clients are roughly defined above but our charges are the folks on the receiving end of the exposures that we estimate.   We have a professional and moral responsibility to all these folks to get it right.

Clients, because of their position, can typically be demanding.   Clearly and appropriately, they want to get an answer that satisfies their needs at the least possible expense.  For your part, you essentially want the same; however, it is our responsibility to render these answers in a realistic manner.   We need to admit to and deal with the inevitable uncertainty born of any analysis and put that uncertainty into context for the client.

A prime example of this balancing act comes to mind, involving an additive used in motor oil.   The additive existed in new motor oil but not in oil that had been used.   I was asked to do a risk assessment on the additive in this application.  Data and modeling indicated that inhalation exposure was not a factor; however, dermal exposure to fresh oil during the oil-changing process could be.    I assumed the following scenario.
  •  Commercial oil changing (e.g., Jiffy Lube)
  •  10 oil changes per day
  •  Fingers of both hands covered with new oil during each change
  •  Instantaneous and complete absorption of the additive through this dermal exposure

Let me know if you are interested in some of the details of the subsequent assessment and I will let you know by email or cover it in a future blog if enough folks are interested.

Because of my own experience at changing oil (I am very sloppy) and my lack of data otherwise, I felt comfortable that this scenario would definitely and appropriately OVERESTIMATE the exposure potential of this material.  More important, given a classical precautionary approach, I did not feel personally comfortable changing any of these assumptions on my own.

The client argued that it was indeed a worst case and told me that the above assumptions should be less stringent.   At that point, I told him/her that it was in fact their business, that they should have more information/insight than I relative to these assumptions, and that they were free to change any of them.  However, I would need them to write down their assumptions and the bases for them, which would be incorporated into the risk assessment as a reference.   Faced with this possibility, they declined to take this approach and agreed with the above assumptions as a working worst case.

An alternative approach would be to commission studies of commercial oil-changing facilities to determine a distribution of the number of changes per day, along with a dosimeter study (e.g., washed cotton gloves that would be extracted and analyzed afterwards) of the amount of new oil that gets on these workers' hands per oil change.   Another approach would be to do a dermal absorption study using human or animal skin.  (Note: I will get into dermal absorption testing and modeling in a future blog.)

Both of the above approaches could be quite expensive but would almost certainly significantly lower the estimated level of exposure to workers.

The bottom line here is that I had to draw a line relative to where my comfort level was regarding these assumptions.   I had to use my best judgment to ultimately trade conservatism for data and vice versa.    The client needed me to tell him/her (i.e., to professionally certify) that their product was “safe” in its intended use.  Indeed, I needed to provide an analysis that accomplished this same end for me as well.

Ultimately, the assessment using the above assumptions did not serve the client’s needs.   Indeed, it turned out that more data were needed and obtained to make the case for safety, with results that both the client and I were comfortable with.   The client, of course, became temporarily poorer having paid for the study and data, but ultimately richer in the confident knowledge that their product was safe.   Arguably, the “charges”, the folks receiving the exposure in this assessment, were also reasonably well served.

This brings me to the topic of the next blog:  Risk Assessment Uncertainty or How Safe is Safe?

Sunday, November 24, 2013

More Modeling Links PLUS Using OLD Models on New PCs

One of the very best things about doing a blog is the almost real-time connections with colleagues in far off places.  Without leaving my home, I can get to interact and network with some marvelous folks who can literally be anywhere.

Many of you have, no doubt, seen the material sent out at regular intervals by Andrew Cutz.    Andrew provides a real service, and a recent item from him really caught my eye.   He put out a notice of a new EPA web site:  EPA-Expo-Box (A Toolbox for Exposure Assessors).   It is a remarkably complete and well organized resource outlining what appear to be most of the EPA resources on this topic.   I drilled down a little into the site and found a page that lists a bunch of exposure modeling tools developed by and for the EPA.   If you start digging into any of these and have some questions, just send me an email and I will try to answer them.  If they are of general enough interest I will address them in a future blog.

Theo Shaeffers from the Netherlands has come through again with a revised version of his website.  An excerpt from a recent email he sent me is below:

Hi Mike
Updated the site with more interesting freeware and moved  the Dutch texts to the end.
End paste.

This site is a “list of lists”, a compendium with tons of resources.  If you click around on this TSAC site you may come to a page showing all of the EPA OPPT Exposure Assessment Tools and Models.   You will notice on that page that quite a few of the models are dated well before the advent of Windows 7, 8 or 8.1.    They remain very good models, but they have not been updated specifically to run on these newer operating systems.   Indeed, some of them do not run on the newer operating systems at all.

A prime example of the above became painfully obvious to me last week.  I needed to install and run the relatively user-friendly and freely available model shown within the above link:  Multi-Chamber Concentration and Exposure Model (MCCEM), version 1.2 (2/2/2001).

MCCEM is a remarkable model in that it allows for the estimation of airborne concentrations in multiple compartments of a home or other compartmentalized building from sources in individual rooms or zones.   Given ventilation rates (or using rates from an extensive database of homes), it calculates not only the airborne concentrations but also the dose received by persons in these compartments, given their time-spent and breathing-rate inputs.   It also allows for Monte Carlo uncertainty analysis, while letting the user place people and sources in various volumes during the day to receive the exposure the model duly calculates.   It provides output reports and a .csv file of the concentrations at each time interval for which the model is run (I typically prescribe every minute).

The MCCEM model has been around for some time and runs very well on Windows 98 and Windows XP, but it may encounter compatibility issues on later versions of Windows (7, 8 and 8.1).  Indeed, neither I nor my IT guru Steven Wright could get it to run on Windows 8.1.   Eventually, the most effective way we found to run it on newer PCs was to first install Windows XP as a virtual machine (e.g., in Oracle VM VirtualBox, a free download) on a PC running the later version of Windows and then install the MCCEM software.

These models are “oldies but goodies”, and someday someone may update them for the new Windows operating systems, but until then we are just going to have to cope.

Sunday, November 17, 2013

Learning from our Mistakes in Modeling

After the discussion of the octanol/water partition coefficient in the last blog, I thought this would be a good point to go over what was perhaps my biggest professional surprise and how we can learn a lot from our mistakes.   What I am recounting below is my recollection of the facts.   To “blind” the identity of the product, some of the numbers have been changed, but the scientific facts remain unchanged.

I was responsible for a risk assessment for a water treatment chemical used to treat large circulating systems within a plant.  The active ingredient in the product was highly lipophilic (it had a high octanol/water partition coefficient) and had limited water solubility (about 500 ppm) (see previous blog on this subject).   It was used in the water at about 5 ppm by weight.   The vapor pressure of the active ingredient is relatively low, such that the saturation or headspace concentration of the pure material (see previous blog on this subject) was only about 1 mg/m3.    The exposure limit was relatively low at 0.4 mg/m3, based on the upper respiratory tract irritation seen in rat inhalation studies at relatively low airborne concentrations.

So I ran the models (UNIFAC and modified Raoult’s Law):  even with a large thermodynamic activity coefficient (see previous blog on UNIFAC), the worst-case saturation concentration of this active ingredient at 5 ppm in water was a VERY small portion of the exposure limit.   I told my internal clients that I could not see any problem with inhalation exposure and risk from this active in this application.
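The screening calculation described above can be sketched roughly as follows.  Every numerical input here is an invented placeholder, not the real product's values:

```python
def headspace_conc_mg_m3(mole_fraction, activity_coeff, vp_pure_atm,
                         mw_g_mol, molar_volume_L=24.0):
    """Modified Raoult's law sketch of the worst-case (saturation)
    headspace concentration over a dilute aqueous solution.

    partial pressure p = gamma * x * p_pure   (atm)
    then the equivalent ppm (v/v) is converted to mg/m^3.
    """
    p_atm = activity_coeff * mole_fraction * vp_pure_atm
    ppm_v = p_atm * 1e6                  # fraction of an atmosphere -> ppm v/v
    return ppm_v * mw_g_mol / molar_volume_L

# Illustrative placeholders: a very small mole fraction of active in water,
# a large UNIFAC-style activity coefficient, and a low pure-component
# vapor pressure.
c_sat = headspace_conc_mg_m3(mole_fraction=5e-7, activity_coeff=1000.0,
                             vp_pure_atm=1e-6, mw_g_mol=300.0)
```

Run with numbers like these, the predicted worst-case headspace concentration comes out orders of magnitude below a 0.4 mg/m3 limit, which is exactly the kind of result that made the assessment look comfortable on paper.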

Imagine my surprise when we started getting reports of upper respiratory irritation in systems where older actives were being replaced with our new product.   I requested that some air monitoring be done in selected areas within a specific plant.   To my complete and utter amazement, some of the values came back around 0.5 mg/m3.   This is 50% of the value for the PURE active!   How could this be?   How could a 5 ppm solution in water generate this much airborne active ingredient?   I had to know, so I booked a trip ASAP to visit the site, see things for myself and repeat the monitoring.

The highest sampled airborne concentration occurred over an open sump that contained about 5000 gallons of the treated water.   As soon as I saw the sump, the whole thing started to make sense.   On top of the liquid surface was a 1–2 inch layer of FOAM.   Apparently some part of the plant process resulted in a foaming agent going into the water, and the agitation within the process produced the foam.

Because the foam was much more "oily" than water, our active was partitioning into the foaming agent and thus into the foam.   Subsequent analysis of the foam showed that it had very high concentrations of our active ingredient.   As the foam bubbles “popped”, they released aerosol particles to the air that were rich in our active, which became further concentrated as the water and any other more volatile components evaporated from the aerosol particles because of their high surface-area-to-volume ratio.   The entire mix of aerosol and active vapor produced the high and irritating breathing zone concentrations above the sump.

If there was no foam, there was no concentrating of the active ingredient and no high level of airborne exposure.   The reality from a product stewardship perspective became clear to me; namely, that when foam was present in some systems, these scenarios were clearly capable of producing unacceptably high airborne concentrations.

For me, the moral of this story is twofold.  First, one should always have some level of “ground truthing” for the predictions of a model.   That is, do some monitoring.   Those who model should always be open to monitoring.   They are not two separate camps: the modeling informs the monitoring, and the monitoring “ground truths” the models.   Second, we should always be ready to have Mother Nature throw us a curve ball.  We can predict a lot with models, but we should always be open to potential game changers from unanticipated factors.

Monday, November 11, 2013

Octanol Water Partition Coefficient – What does it Mean for Exposure Assessment?

The octanol/water partition coefficient is determined by a relatively simple test.   It is done by taking roughly equal volumes of octanol (an “oily” alcohol) and water, mixing them together, throwing the chemical of interest into the mix and mixing some more.   We all know that oil and water do not mix, and as a result, when everything settles down we get an oil layer of octanol on top of the water layer with the chemical of interest dissolved in each.   The ratio of the amount in the octanol to the amount dissolved in the water is the octanol/water partition coefficient, or Kow.   This number indicates whether a chemical is lipophilic (a “fat lover”) or hydrophilic (a “water lover”).   For example, methanol goes mostly into the water while benzene goes mostly into the octanol.   Most organics are so lipophilic that the scale is made logarithmic.   Thus, we take the log of the ratio, which is called log Kow or pKow.   If the Kow is 1000, then the log Kow or pKow is 3.    Since methanol's concentration in the octanol is only 17% of its concentration in the water, its Kow is 0.17 and its pKow is -0.77.   See the previous blog on logarithms (A Simple Refresher of Logarithms, July 2013) if this is at all unclear.

The pKow of benzene is 2.13, which means it really likes fats and generally dislikes water.  What does this all mean for those of us in the exposure assessment business?   Well, first of all, it tells us roughly where benzene or any other fat-loving chemical will wind up in the environment.   It will tend to wind up in environmental compartments that are rich in organic constituents, like some soils and most sediment.   If inhaled or ingested, it will tend to accumulate in areas of the body that have a lot of blood flow and a lot of fat.

In general, chemicals with a high pKow tend to bioaccumulate or bioconcentrate in organisms within the environment.   That is, they concentrate in the tissues of animals as you go up the food chain.  DDT was a prime example of this in the case of the Bald Eagle.  Since we are at the top of the food chain, this could also be problematic for us.

Finally, chemicals with high pKow tend to be absorbed by the skin much more readily that those with low pKow.   Look at the following equation for dermal adsorption from Dr. Wil tenBerge’s web site:
From this it appears that pKow and molecular weight drive the dermal absorption of chemicals.
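The equation itself did not survive into this post. A widely quoted regression of exactly this form is the Potts & Guy (1992) skin permeability model, which is driven by the same two quantities, pKow and molecular weight. Whether this is the exact equation on Dr. tenBerge’s site is an assumption on my part, so treat the sketch below as illustrative only:

```python
def log_kp_potts_guy(pkow, mw):
    """Potts & Guy (1992) regression: log10 of the skin permeability
    coefficient Kp (cm/h) from log Kow (pkow) and molecular weight
    (mw, g/mol)."""
    return -2.72 + 0.71 * pkow - 0.0061 * mw

# Benzene: pKow = 2.13, MW = 78.11 g/mol
kp = 10 ** log_kp_potts_guy(2.13, 78.11)   # permeability in cm/h
```

Note that the signs match the text’s claim: a higher pKow raises the predicted permeability, while a higher molecular weight lowers it.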

Of course, the higher the pKow, the lower the water solubility, and this is an important consideration in some exposure assessments.   One can get faked out by this fact, which will be the subject of my next blog, along with how I received perhaps my greatest surprise to date in doing an exposure assessment.

Monday, November 4, 2013

The Promise of REACh to Exposure and Risk Assessment

REACh could be the most impactful piece of legislation in the history of risk assessment.  From my perspective, the jury is still out.

Below are excerpts from a chapter I co-authored with Ron Pearson and Susan Arnold that currently appears in the 2nd Edition of the AIHA book Mathematical Modeling for Estimating Occupational Exposure to Chemicals.  The chapter is entitled: REACh – A New and Important Reason to Learn Modeling.

In the 1990s at meetings of the European Union’s (EU) Council of Environment Ministers, concerns were raised that European chemical policies did NOT provide enough protection to chemical product users. An evaluation of existing rules revealed that new substances were heavily regulated, but they made up only a tiny fraction of the total chemicals in commerce. On the other hand, existing substances, which made up the vast majority of the total volume of chemicals in use, were essentially unregulated. Out of this, REACh was conceived[1],[2].

This lack of risk assessment attention for existing chemicals has also been recognized on this side of the Atlantic.  The US EPA Board of Scientific Councilors has explicitly noted that no comprehensive assessment of human consumer exposure to chemicals has occurred and that the vast majority of exposures have not been systematically and proactively addressed[3].   Indeed, it is reasonably well established at this point that the risks of most types of consumer chemical exposures in modern society have not been assessed.

The types of exposure mentioned here are the exposures to chemicals that result predominately from residential sources.  Because of the leadership of the European Union and the development of REACh, we are on the cusp of a change in which regulatory mandates playing out in the rest of the world are literally driving the overall scientific development of human health exposure assessment for the multitude of common and relatively unstudied substances to which humans are exposed.

What then is REACh all about at its base level and what does it mean to the Industrial Hygienist?

REACh stands for Registration, Evaluation, Authorisation and restriction of Chemical Substances.  It is a complex piece of rulemaking that lawmakers have been hammering out in Europe for some time.  REACh became the law of the land in the EU on June 1, 2007.  Its primary purpose is to evaluate and control the human health risk from chemicals. It places the burden of proof for chemical safety on manufacturers, removing it from regulatory authorities.  Because of the complexity and broad scope of REACh, it is driving the need to screen and assess many chemicals and substances in a relatively short (but rather immediate) time-frame.
It is thus clear that REACh was conceived in Europe to regulate the majority of chemicals that exist in commerce.  What then is the upshot of REACh relative to industrial hygiene?  Professionals anticipate REACh will actually spur innovations and opportunities in estimating exposures to consumer products, which would positively affect the field of IH. Toxicologists, the other half of any risk assessment team, are facing the same issues.

First Comes Assessment and Modeling

If  REACh requires a comprehensive and scientifically valid and rational evaluation of human exposure to substances, then modeling is an indispensable element of that assessment.   Since it has been established that this level of comprehensive assessment of human exposure to chemicals has not occurred and that the vast majority of exposures have not been systematically and proactively addressed, a considerable amount of scientific model development needs to occur.  For example, personal chemical exposures to consumers from predominately residential sources have been ignored by most chemical modelers.  Given this commitment within REACh to assessment, exposure model utilization and development should come to the forefront[4].
Exposure models are critical to any commitment for comprehensive exposure assessment because we will never be able to monitor or measure every exposure everywhere.  The need for models is particularly acute with REACh because it increases proportionally with the growing universe of chemicals under consideration. Also, as a technical expert, an industrial hygienist has a critical need for objective and rational scientific tools for analysis.  These facts necessitate both continuing education in and the ongoing scientific development of the discipline. 
Specific research needs relative to exposure model development and validation have been outlined and presented[5]; unfortunately, to date the authors are unaware of any concerted and coordinated effort to follow up on these recommendations. (Jayjock Note:  This document and web site can no longer be found online.  I will put it on my web site but in the meantime, please send me an email at and I will send you a copy).
Stripping away all the complexity, the “bottom line” for those responsible for conducting risk assessments under REACh is to provide enough quality information about any substance’s exposure pathway(s) and toxic effects in the real world to pass the “red face” test of accountability.   The size of the challenge is clearly daunting and the need for tool development is critical. 
Jayjock Comment:   This chapter was written about 5 years ago and I must say from my perspective, I have not seen a groundswell of data or of researched and developed modeling tools (e.g., source characterization) coming forth.   The economic and political tides in all of this continue to shift, but we can only hope that the promised progress will ultimately be forthcoming.

[1]  (Accessed October 28, 2013)
[2]  (Accessed January 9, 2008)
[3]    USEPA: Human Health Research Program Review: A Report of the US EPA Science Advisory Board,  Final Report of the Subcommittee on Human Health, EPA Office of Research and Development , May 18, 2005 revised July 18, 2006,  [available ONLINE] (Accessed October 28, 2013)
[4]    Jayjock, M.A, C.F. Chaisson, S. Arnold and E.J. Dederick, Modeling framework for human exposure assessment, Journal of Exposure Science and Environmental Epidemiology (2007) 17, S81–S89.

[5]     Kephalopoulos, S, A. Arvanitis, M.A. Jayjock (Eds):  Global CEM Net Report of the Workshop no. 2 on “Source Characterization, Transport and Fate”, Intra (Italy), 20-21 June 2005.  Available online:   (Last accessed January 9, 2008).

Monday, October 28, 2013

Why Isn’t Risk Assessment Done on All Chemicals?

I have recently been asked why there are so few occupational exposure limits compared to the number of chemicals in commerce.   There are a number of potential answers to this, but I think the simplest and most accurate comeback is that there is a lack of data.   So that begs the question: why is there so little data?

We all probably know by now that Human Health Risk Assessment is the integration of the ability of a chemical to cause harm to human health with the actual exposure to that chemical that might occur in the real world.   In terms of a simple conceptual model:   Risk = (Toxicological Harm per unit Exposure) × (Exposure).    EVERY chemical will cause toxicological harm at some dose.   Pure oxygen breathed for an extended period is toxic.  Indeed, Wikipedia reports that pulmonary and ocular toxicity result from extended exposures to elevated oxygen levels at normal pressure.   Given a dose high enough, toxicological harm comes from each and EVERY chemical we can think of.  The “what” of that untoward health outcome is called the HAZARD IDENTIFICATION for that substance. In the above example with oxygen it is pulmonary and ocular toxicity.  It is usually the first bad thing health-wise that happens as you ramp up the exposure level, and it can range from irritation to more serious outcomes like cancer or death.   The exposure level over which that bad health effect happens defines the POTENCY of the chemical’s toxicological effect, with highly potent materials causing effects at very low exposures.   Oxygen is not very potent.  By comparison benzene is very potent in its ability to cause toxicological harm.
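In screening practice, the conceptual model above is often reduced to a hazard quotient: the estimated exposure divided by an exposure limit that already encodes the chemical’s potency. A minimal sketch with made-up numbers (the 1 ppm limit and 0.2 ppm exposure below are hypothetical, not values from the text):

```python
def hazard_quotient(exposure, limit):
    """Screening-level ratio of estimated exposure to the exposure
    limit. A value above 1 flags a potential over-exposure."""
    return exposure / limit

# Hypothetical 8-hr TWA of 0.2 ppm against a 1 ppm exposure limit
hq = hazard_quotient(0.2, 1.0)
print(hq)    # 0.2 -> well below 1, screened as acceptable
```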

The point of all this is that you cannot begin to do a risk assessment without data on the toxicological properties (HAZARD IDENTIFICATION and POTENCY) of the chemical of interest.  If you have No data you have No risk assessment unless you really force the issue which will be discussed below.

Unfortunately, much of the toxicological data that we have was sparked by first seeing the adverse health effects of the chemicals on exposed humans.   Benzene falls into this category, as do asbestos and vinyl chloride and all the other known human carcinogens.   Seeing people stricken by these substances caused them to be studied.   The other category of well-studied chemicals is pesticides.  This is primarily because they are designed to be commercial poisons, so they are expected to be relatively potent toxicants and clearly worthy of study.   As a result they are also highly regulated.  How do we address the rest of the chemicals in our world?

In the late 1990s the Environmental Defense Fund issued a groundbreaking report entitled Toxic Ignorance (The Continuing Absence of Basic Health Testing for Top-Selling Chemicals in the United States).    It proved with undeniable hard data that, at that point in time, “…even the most basic toxicity testing results cannot be found in the public record for nearly 75% of the top volume chemicals in commercial use.”   As you might imagine it caused quite a stir, and the EPA got involved and eventually hatched the High Production Volume (HPV) Challenge program.   This resulted in considerably more toxicological data, but as you might guess there remains a severe lack of data for the tens of thousands of chemicals still in commerce to which folks are exposed every day.

But that takes us to an even more fundamental question:  Why haven’t we been testing the chemicals that we breathe, eat and touch in our environment to determine their toxicological effects all along?   Why has it taken the cold hand of public scrutiny and regulation to get things moving?  I think one of the strongest factors behind this is the misguided presumption of safety by those with an economic stake in these chemicals.   Many believe at some level that “no proof of risk means proof of no risk.”  This is, of course, not true; however, for these folks there is no incentive to go “looking for trouble” by testing the chemicals to identify the hazard they pose and the potency of that hazard.   Toxicology testing is, or has been, considered to be relatively expensive to do.   Thus, the reasoning goes, why spend money on testing when you assume a chemical is safe and the results could only bring bad news?

There is another large factor in this and that is the mistrust of toxicological data.   Those who do not like the results point to the high doses used in the toxicological studies and assert that they do not relate to or represent the exposures received in the real world and are therefore unrealistic measures of risk.   We will get into these issues (dose-response extrapolation and setting exposure limits) in future blogs but I think you can see how the politics might be playing out in all this to have a strong bias towards not doing toxicological testing.

So at the end of 2013 we are left with lots and lots of chemicals but relatively little toxicology data to characterize the hazard and potency of these tens of thousands of substances in commerce.   My sense, however, is that as Bob Dylan once sang, “The times they are a changin’…”   The European REACh statute is essentially forcing risk assessment and the assignment of exposure limits based on very prescriptive procedures.  I will go over REACh in a future blog because I believe it has the capacity, given the political will, to ultimately drive the science of risk assessment.   This could conceivably force much more testing of both toxicity and exposure, but that remains to be seen.
On another front the EPA is spending considerable money (tens and perhaps hundreds of millions of USD) and resources in advanced toxicological screening using innovative molecular biology techniques in a program entitled ToxCast:     

More confident data will begin to feed the risk assessment process, which should ultimately lower the uncertainty of the analyses.  Lower uncertainty will, in turn, lower the forced overestimation of risk that comes from doing a risk assessment with a lack of data.  Indeed, in the end this confident knowledge will be much more cost-effective in focusing on real risk and not spending money on phantom risk borne of uncertainty.

Monday, October 21, 2013

Career Advantages of Being a Modeler

It occurred to me recently that I have not given you all the really good reasons why you should be dedicated to learning modeling. I will attempt to do so herein. The reasons are listed and further explained below:

  1.  It’s cool!
  2.  It will really help you in your job as an industrial hygienist or exposure assessor
  3.  You will become a relatively rare and sought-after animal
  4.  It could dramatically increase your value to your employer

It’s Cool:
Huey Lewis once sang that “it’s hip to be square!”   If you have any feel for science at all, understanding or looking into the workings of reality can be a real rush.  The Science Channel counts on this human trait to gather viewers.  Indeed, seeking and organizing the factors that drive human exposure in any scenario is part of being human, and many of us find it to be simply fun.   Let’s face it, we are curious animals that love to be competent and develop tools (i.e., models), and acting on that curiosity is an end in itself for many of us.

It will really help you in your job as an industrial hygienist or exposure assessor:
Modeling will inform your judgment as to where the significant exposures might be, whether they occur in the present, happened in the past or have not occurred as yet.   It will allow you to estimate the exposure potential of scenarios literally on the other side of the globe.   It should also ultimately mean that you will waste less time monitoring exposure scenarios that do not need measuring while focusing on others that do.   Properly done, skill in modeling could in the long run mean less direct monitoring and more effort put into characterizing what exactly is causing the potential over-exposures.

You will become a relatively rare and sought-after animal:
Thanks to the efforts of my colleagues within the AIHA there are quite a few more modelers out there in the Industrial Hygiene Community than there were 20 years ago when we started beating the drum, but there are frankly still relatively few.   It is not the sort of discipline that you pick up very quickly, and there are very few places to actually learn it.   The two-day AIHA Professional Development Course is probably the best, but it is very intense and, while Rome was not built in a day, it is even harder to “make a modeler” in two days.    Indeed, there are quite a few reasons why there remain relatively few folks who are reasonably skilled in human exposure modeling.   I outline this situation in detail in the following document:

Those of you that read this short Word document will find that it is an “offer of service” to clients to take the time and attention needed to actually train professionals on-the-job in modeling to have them become fully functional.   The offer has been out for a while and I have yet to have any takers.   If I get a lot of response to this particular blog I may reproduce it in a future blog.   Frankly, it walks the line between service to you and your management and self-promotion but I am willing to take that chance to get the word out.
The fact remains that there are very few places to get this training and that if you take the time to do so you will be a rare, and valuable, exception.    You will no longer be someone who just measures and interprets exposures. You will be a technologist that predicts exposures and understands and can explain the scientific basis for that judgment.  That skill is worth something as the next point stresses.

It could dramatically increase your value to your employer:
I tell you truthfully that being able to model exposures (and dose-response) made my career at the Rohm and Haas Company.   The skill was responsible for at least 3 promotions within that company.   Using models, I was able to predict exposures with just a modicum of data and the invocation of assumptions.   I could explain and justify those predictions based on first-principles (and reviewable) science, and the managers just loved it.   Over 80% of my work was rendering these informed and explained technical opinions regarding the relative safety of products.   When the margins of safety were high enough, it gave them the confident knowledge they needed to proceed.   When the margins were not adequate, it gave me the necessary arguments (and support from my management) to obtain more information and data to reduce the uncertainty and usually the predicted level of risk.

Bottom Line:  Becoming skilled at modeling is not an easy or a short road but it’s the road less traveled and it could offer tremendous benefits to you and your career. 

Friday, October 18, 2013

Exposure Modeling Course at the SRA in Baltimore December 8 2013


This is your chance to learn Monte Carlo uncertainty analysis modeling in a one-day course from an excellent teacher.   Tom Armstrong and I have previously put together and taught a half-day course on human health exposure modeling here in the Philadelphia area.   Tom asked if I would be interested in teaching a one-day course at the SRA in Baltimore this year.  I told him that I would, but that there would have to be enough students attending to make it worthwhile for him or me to do so.   Tom agreed to put the course together and teach it by himself if a minimal number of students signed up, and to have me come on board if the student count became high enough.   Well, it looks like there are enough folks signing up for Tom to give the course.   Tom is an excellent modeler and teacher and you will get your (or your company's) money's worth if you attend this class.   If considerably more of you sign up in the next few weeks, I will be there as well, doing what I really love: teaching and getting paid for it.

The course is on Sunday Dec 8.

Below is mostly cut and paste from the SRA:

SRA has arranged for a special rate of $164 a night at the Hilton Baltimore. The Hilton is located in Baltimore’s Inner Harbor district and is only 15 minutes from BWI airport. Transportation from BWI Airport is available via the Light Rail System for $1.60 each way. The Hilton is located adjacent to the Convention Center station. Guests can enjoy the hotel’s prime location next to Oriole Park at Camden Yards; M&T Bank Stadium, home of the Baltimore Ravens; and the University of Maryland Medical Center.

Please use this link for the discounted SRA rate:

Workshops will take place on Sunday, December 8, and Thursday, December 12 at the Hilton.  Listed below are the available workshops.  Go to to see all of the workshop descriptions.

If that link does not work, try this one:

I understand that you can sign up and pay for the workshop without registering for the conference.   I also hear that there is a significant discount rate for folks who are full time students that want to take this course.