
Monday, May 26, 2014

Dermal Exposure: The Almost Forgotten Route of Exposure for Workers


One of my favorite sayings is:  “If you only have a hammer then you tend to view all of your problems as nails.”   I cannot help but think of the applicability of this saying to the past and, to an extent, the current practice of Industrial Hygiene.    It speaks to the need of our profession to broaden our perspective and available tools.

I admit that my training as an IH was many years ago but during that period I received essentially no education in the recognition, evaluation and control of exposure and risk from the dermal route of exposure.   I would love to hear from those of you who have had more recent training as to how it is being addressed these days.

Those of you who have been reading this blog for any period know that I have put a lot of time into talking about Occupational Exposure Limits (OELs). Of course, I do this because OELs represent half of the story of RISK in the equation Risk = Exposure x 1/OEL. When the resulting risk ratio is greater than 1 the risk is unacceptable. If the ratio is less than 1 it is OK, or at least NOT unacceptable. The workplace exposures that we measure with great skill and care as IHs and exposure assessors have NO contextual meaning without some comparison to a toxicological benchmark. As Industrial Hygienists, our primary source of toxicological benchmarks is the OEL.
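In code the arithmetic is trivial, but it makes the decision rule explicit. A minimal sketch (the exposure and OEL values are hypothetical, and both must be in the same units):

```python
def risk_ratio(exposure: float, oel: float) -> float:
    """Risk = Exposure / OEL; both arguments in the same units (e.g., ppmv)."""
    return exposure / oel

# Hypothetical example: an 8-hr TWA of 75 ppmv against a 50 ppmv OEL
ratio = risk_ratio(75.0, 50.0)
print(f"Risk ratio = {ratio:.1f}: {'unacceptable' if ratio > 1 else 'not unacceptable'}")
```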

Unfortunately, almost all of the OELs that we deal with are one-dimensional. In a practical sense, the only OELs of any significance that most of us use are expressions of AIRBORNE concentrations of the chemical of interest in the breathing zone of the person receiving the exposure. This focuses us on the inhalation route of exposure while leaving us somewhat clueless as to how we might handle and similarly evaluate other routes.

From my experience, the most important “other route” of exposure in the workplace is dermal. This route is often dominant in industrial and commercial scenarios for chemical compounds with relatively low volatility. For example, I recall a study in which workers sorting lumber freshly treated and still wet with pentachlorophenol got well over 80% of their total exposure via the dermal route, as evidenced by a comparison of air and biological monitoring.

One can roughly convert an airborne OEL to a dermal OEL using the following assumptions:
  •  100% systemic absorption via inhalation
  •  10 m3 of air inhaled per 8 hours
  •  70 kg worker


Let us take toluene, with an 8-hour OEL of 50 ppmv, as a working example. Converting ppmv to mass units using toluene's molecular weight (92.1 g/mol) and the molar volume of air at about 25 °C (24.4 L/mol):
(92.1/24.4)(50 ppmv) = 189 mg/m3
(189 mg/m3)(10 m3/day)/(70 kg) = 27 mg/((kg)(day))

This is our dermal (actually an overall systemic) OEL expressed as a traditional toxicological dose metric.
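Here is a short script that runs these numbers, using only the assumptions listed above (the constants are those stated in the text):

```python
# Airborne-to-systemic OEL conversion using the assumptions above:
# 100% inhalation absorption, 10 m3 inhaled per 8-hr shift, 70 kg worker,
# and a molar volume of 24.4 L/mol for air at ~25 C and 1 atm.

MOLAR_VOLUME_L = 24.4   # L/mol
AIR_INHALED_M3 = 10.0   # m3 of air inhaled per 8-hr workday
BODY_WEIGHT_KG = 70.0   # reference worker body weight

def ppmv_to_mg_m3(ppmv: float, mol_weight: float) -> float:
    """Convert an airborne concentration in ppmv to mg/m3."""
    return (mol_weight / MOLAR_VOLUME_L) * ppmv

def airborne_oel_to_systemic(oel_ppmv: float, mol_weight: float) -> float:
    """Convert an 8-hr airborne OEL (ppmv) to a systemic dose in mg/((kg)(day))."""
    return ppmv_to_mg_m3(oel_ppmv, mol_weight) * AIR_INHALED_M3 / BODY_WEIGHT_KG

# Toluene example from the text: MW = 92.1 g/mol, OEL = 50 ppmv
print(airborne_oel_to_systemic(50.0, 92.1))  # ~27 mg/((kg)(day))
```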

Now all we need to do is figure out two things:
  1. How much of the stuff goes onto your skin
  2. How much then goes through your skin to the systemic circulation (blood and lymph).  

For the first task we can use work done at the EPA years ago indicating that when you splash or immerse your skin in water or aqueous solutions, about 3-4 mg/cm2 of liquid is retained per splash or dip. With a light oil this rises to about 15 mg/cm2. The EPA’s Exposure Factors Handbook (http://www.epa.gov/ncea/efh/pdfs/efh-complete.pdf) goes into this in more detail with better documentation. This reference also gives the surface area of various body parts.
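As a rough screening sketch of task #1, using the retention values just quoted (note that the 1,000 cm2 surface area for two hands is my ballpark assumption for illustration; look up the actual values in the Handbook):

```python
# Screening estimate of dermal loading from a single splash or dip,
# using the EPA retention values quoted above.

RETENTION_AQUEOUS_MG_CM2 = 3.5     # ~3-4 mg/cm2 for water/aqueous solutions
RETENTION_LIGHT_OIL_MG_CM2 = 15.0  # ~15 mg/cm2 for light oils

def dermal_loading_mg(area_cm2: float, retention_mg_cm2: float,
                      weight_fraction: float = 1.0) -> float:
    """Mass of chemical deposited on skin for one splash or dip.

    weight_fraction: fraction of the chemical of interest in the liquid
    (1.0 = neat material).
    """
    return area_cm2 * retention_mg_cm2 * weight_fraction

# Example: both hands (~1,000 cm2, my assumption) dipped in a neat light oil
print(dermal_loading_mg(1000.0, RETENTION_LIGHT_OIL_MG_CM2))  # ~15,000 mg on skin
```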

Just as an aside, it should be no surprise that the vast majority of dermal exposure from chemicals that are “handled” goes to the hands and lower arms. We measured this value at about 90% a few years ago when we tested wall painters, and it held irrespective of whether they painted via brush, roller or spray.

Next you need to figure out how much of the applied dose goes through the skin into the systemic circulation. You can assume the worst case, namely that all of it goes through instantaneously, and indeed some regulatory protocols have done this; however, this is a fairly dramatic overestimation of exposure, probably anywhere from 2- to 100-fold.

A better methodology is to use physical-chemical modeling (what else?). The best models, in my estimation, have been put together in a user-friendly form by Dr. Wil tenBerge and members of the AIHA Exposure Strategies Committee: https://www.aiha.org/get-involved/VolunteerGroups/Documents/EXPASSVG-IHSkinPerm.pdf
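To be clear, the sketch below is NOT IHSkinPerm; it is only the bare steady-state Fick's-law calculation (flux = permeability coefficient x concentration) that models like IHSkinPerm refine with lag times, evaporation and finite-dose behavior. The Kp value here is purely illustrative:

```python
# Steady-state dermal uptake: flux J = Kp * C, dose = J * area * time.
# Kp values come from the literature or QSPR estimates; 0.001 cm/hr is
# an illustrative placeholder, not a recommended value.

def dermal_uptake_mg(kp_cm_hr: float, conc_mg_cm3: float,
                     area_cm2: float, hours: float) -> float:
    """Systemic uptake (mg) through skin at steady state."""
    return kp_cm_hr * conc_mg_cm3 * area_cm2 * hours

# Illustrative: Kp = 0.001 cm/hr, aqueous solution at 1 mg/cm3,
# 1,000 cm2 of exposed skin, 8 hours of contact
print(dermal_uptake_mg(0.001, 1.0, 1000.0, 8.0))  # 8 mg absorbed
```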

I will discuss some of the details in IHSkinPerm in a future blog but I wanted to make you aware of its existence. 

Another aspect of dermal exposure worth mentioning here is the real possibility of dermal exposure from vapors in AIR. The practical upshot of this type of exposure is that a respirator might not be fully protective, or even sufficient, in some scenarios. As far as I know, only Dr. tenBerge has tackled this estimate, and it is also part of IHSkinPerm discussed above.


Given the above, I suggest that it is time to consider adding more tools so that we can view and deal with our exposure assessment challenges more comprehensively and more effectively.

Sunday, May 18, 2014

Modeling “large” evaporating sources (backpressure)


Recent blogs have covered the subject of surprises. During some of my early work on modeling I got a surprise relative to the calculated airborne saturation concentration of an evaporating source. Like almost all scientific surprises, this one was very useful and led to the development of a modified model for estimating airborne concentrations above and around large evaporating sources, which I termed the backpressure model.

If you put an evaporating source into a closed vessel and have enough of the material within the vessel, ultimately the saturation concentration (Csat) will be obtained in the headspace air volume of the vessel.   Csat is the highest concentration of vapors possible from the evaporating source.   Csat is directly related to the vapor pressure as follows:

Csat = (vapor pressure/atmospheric pressure)(1,000,000) 
The units of Csat are ppm volume/volume or ppmv.
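In code (the toluene vapor pressure of roughly 28 mmHg at 25 °C is my round figure for illustration, not a value from the text):

```python
def csat_ppmv(vapor_pressure: float, atm_pressure: float = 760.0) -> float:
    """Saturation concentration in ppmv; both pressures in the same units (here mmHg)."""
    return (vapor_pressure / atm_pressure) * 1_000_000

print(csat_ppmv(28.0))  # toluene at ~25 C: roughly 37,000 ppmv
```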

Now take a relatively small sample of the material of interest and put it on a watch-glass on an open scale and allow it to evaporate so that you can measure its evaporation rate in mg/hour.   This should equal the generation rate of an evaporating source (G).

We know from previous work that the concentration in any volume with a constant generation rate (G) and ventilation rate (Q) is:

Ceq = G/Q


Remember our watch-glass experiment above. We now have a value for G in mg/hr. Say the evaporating surface area was about 5 cm2. Normalizing this rate per unit area (dividing by 5 cm2) lets us express G as mg/((cm2)(hr)).

So we used this G to estimate the Ceq  (=G/Q) of a room with a large (10 m2) spill of toluene in a reasonably well ventilated room.   We calculate G by multiplying our experimental G rate per cm2 by the area of the spill which is 100,000 square centimeters. 

When we calculate Ceq we get a value that is MUCH larger than Csat which is physically impossible!  Surprise surprise!

What is going on here? The answer is backpressure. The entire driving force for evaporation is the diffusion of molecules from the region of high concentration in and immediately above the evaporating liquid to the region of relatively low concentration of the same molecules in the receiving air volume. When we release relatively few molecules from the source compared to the receiving volume, this driving force is maintained and G is constant. However, when the receiving volume starts to contain a built-up concentration of the evaporating molecules, the driving force is diminished and the evaporation rate (G) decreases.

Think of it this way. In the desert, with very low humidity in the air, wet clothes dry very quickly. In more humid environs, when the relative humidity is 50%, clothes dry roughly half as fast, and at 100% humidity they do not dry at all.

In a large spill the initial generation rate (G) is maximized because there are no molecules of the evaporating liquid in the receiving air; however, relatively soon thereafter the molecules build up in the air and begin to retard the generation rate (G).   Thus, the generation rate is not a constant but is variable with time:

              G = Gmax (1-C/Csat)

Gmax is the initial generation rate at the beginning (t = 0)
C is the concentration in the receiving volume (which is a variable function of time until it reaches Ceq)
Csat is defined above

If C in any volume ever gets to the point where it equals Csat (Ceq = Csat), as it does in a closed vessel, then G becomes equal to zero.

Backpressure is always present over evaporating or vaporizing sources, even in ventilated volumes; however, we usually do not have to account for it because its effect is relatively small for evaporating sources with small area-to-receiving-volume ratios. It becomes important when we have LARGE evaporating sources, such as a large-area spill indoors, or large vaporizing sources like off-gassing wall paint or carpets.

There is a module in IH MOD for considering backpressure (The Well-Mixed Room Model with Backpressure).   Because backpressure shows itself mostly in indoor sources with large areas, this is the correct model in which to evaluate its effects.
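The sketch below is not the IH MOD module itself, just a minimal numerical integration of the same mass balance, V dC/dt = Gmax(1 - C/Csat) - QC, with illustrative numbers of my own. Note that the naive Ceq = G/Q for this case would be 500,000 mg/m3, far above Csat, while the backpressure-limited result correctly stays below it:

```python
# Well-mixed room with a backpressure-limited source (illustrative numbers).
# Mass balance: V * dC/dt = Gmax * (1 - C/Csat) - Q * C

def well_mixed_backpressure(gmax_mg_hr: float, csat_mg_m3: float,
                            q_m3_hr: float, v_m3: float,
                            hours: float, dt: float = 0.001) -> float:
    """Euler integration of room concentration C(t); returns C in mg/m3."""
    c, t = 0.0, 0.0
    while t < hours:
        g = gmax_mg_hr * (1.0 - c / csat_mg_m3)  # source retarded by backpressure
        c += (g - q_m3_hr * c) / v_m3 * dt
        t += dt
    return c

# Large spill: Gmax = 5e7 mg/hr, Csat ~ 140,000 mg/m3 (roughly toluene),
# Q = 100 m3/hr, V = 50 m3. Naive G/Q would be 500,000 mg/m3 (> Csat!);
# the backpressure model levels off near 109,000 mg/m3 instead.
print(well_mixed_backpressure(5e7, 1.4e5, 100.0, 50.0, hours=2.0))
```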

The original paper on backpressure has a lot more detail and I would be happy to send it to whomever is interested and writes to me at mjayjock@gmail.com


Monday, May 12, 2014

How we learn from Surprises – Part II


We owe an awful lot to the scientific method. It is a remarkably simple concept but it is profoundly important to the technological progress of our society. My admittedly simple view of the method is below:

1. State the problem or premise
2. Form a hypothesis
3. Experiment and observe
4. Interpret the data
      If you are really surprised go back to #1
5. Draw conclusions and make predictions

This is essentially what we did when we tested our new wallboard room chamber for ventilation rate with the hypothesis that it would be quite low versus the normal infiltration rate for residences of about 0.4 mixing air changes per hour.   We got a real surprise in step 4 when it was over 10 times higher than expected and we started over at step 1. This entire story is presented in a previous blog.  In this blog I want to discuss another Step 4 surprise.

We were working on an exposure assessment of a wood preservative applied to wood that was being used indoors. We needed to understand the rate of off-gassing of this preservative so that we could estimate the indoor airborne concentrations in various scenarios.

We knew that absorbent materials within typical rooms (carpets, furniture, etc.)  would act as reservoirs or “sinks” so we decided that we wanted to measure the “pure” rate of off gassing by putting a piece of treated wood into a glass chamber, ventilating the chamber and measuring the concentration in the exhaust air over time.    We originally modeled this system with the following conceptual picture of what was occurring.   First the preservative would off gas from the wood. It would then go to the air where some of it would be deposited on the glass surface and some would be exhausted from the glass chamber.  Eventually the glass “sink” would be filled and the entire system would be in equilibrium.   We assumed that the preservative was chemically stable and would not degrade in the time frame of this experiment.

We were wrong. We could not get the model to work without putting in a degradation term for the material on the interior glass surface. Once we did this the numbers worked out. Declaring that there was significant chemical degradation on the glass was not sufficient; we needed to prove it. We did so with another experiment in which we deposited a known amount of preservative on the glass, proved that we could get 100% of it back at time equal to zero, and then measured its degradation with time. The sordid details of all this are provided in a 1995 paper which I will be happy to send to whomever asks me for it at mjayjock@gmail.com.
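For the curious, here is a minimal two-compartment sketch of the kind of mass balance just described: constant emission into the chamber air, ventilation loss, first-order deposition to the glass sink, and first-order degradation of the sorbed material. Every rate constant below is invented for illustration; the real parameterization is in the 1995 paper:

```python
# Chamber air + glass-sink mass balance with degradation (illustrative only).
#   air:  V * dC/dt = E - Q*C - k_dep*V*C
#   sink: dM/dt = k_dep*V*C - k_deg*M

def chamber_with_degrading_sink(e_mg_hr: float, q_m3_hr: float, v_m3: float,
                                k_dep_per_hr: float, k_deg_per_hr: float,
                                hours: float, dt: float = 0.01):
    """Euler integration; returns (air conc in mg/m3, sorbed mass in mg)."""
    c, m, t = 0.0, 0.0, 0.0
    while t < hours:
        dep = k_dep_per_hr * c * v_m3            # mg/hr deposited to glass
        c += (e_mg_hr - q_m3_hr * c - dep) / v_m3 * dt
        m += (dep - k_deg_per_hr * m) * dt       # sink fills, then degrades
        t += dt
    return c, m

# Invented numbers: 1 mg/hr emission, 0.5 m3/hr ventilation, 0.1 m3 chamber
print(chamber_with_degrading_sink(1.0, 0.5, 0.1, k_dep_per_hr=2.0,
                                  k_deg_per_hr=0.1, hours=48.0))
```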

All this harkens back to the wise quote: "All Models are Wrong but some are Useful." This surprise was incredibly useful to us and led to a much better understanding of the fate and ultimate exposure potential of this product indoors. As it turns out, it was somewhat reactive with normal ambient oxygen in our atmosphere but, as you might imagine, it is very reactive to even the low levels of tropospheric ozone that can make their way into our indoor air, especially through open windows and doors.

Treasure your surprises.
  




Monday, May 5, 2014

Total Quality Leadership (TQL) and Risk Assessment


More than a few years ago the Total Quality Leadership or TQL process was all the rage in management circles.  I was working for a large chemical company and we decided to apply it to Human Health Risk Assessment.   

If you are of a certain age, you may remember that most large companies got onboard with TQL and TQM (Total Quality Management) relative to their application to manufacturing and sales processes; however, we saw lots of parallels to what we were doing in Risk Assessment. Indeed, it would appear that any situation involving a service to a client could benefit from the principles of this management technique.

For those of you not familiar with TQL it is defined as the outline or explication of a process that embodies the touchstones of operational efficiency and customer satisfaction.   Its primary elements include client identification along with enhanced and explicit client interaction and communication, measurement of client satisfaction, and a drive for constant improvement.

In order to do this for the company I worked for at the time, we organized a team that met every few weeks for about a year to discuss, define and elucidate the elements and boundaries of the Risk Assessment Process. 

You think you know what you are doing, but it is really surprising how much you can learn about your “business” by going through this type of process. In really picking apart what we do, we came up with a graphic entitled “Top-down Flowchart of the RA Process,” which is reproduced below. Some of the revelations, for me, from all this:
  •  As Exposure/Risk Assessors we have clients and we have charges.   The client is the person or persons paying us for our service and the charges are the folks on the receiving end of the exposures and risks that we estimate.   Of course, we have a considerable level of responsibility to each.
  •  A critical step in any risk assessment is the decision to actually do a risk assessment and that decision typically belongs to the client.
  • The risk assessor does own the decision as to whether enough resources are available to do what is being asked.  If not, he or she has the right and responsibility to kick the proposition outside the boundaries of  the RA Process and give it back to the client for his or her decision to provide those resources or not.
Of course, a lot more detail than this came out of the effort, and it is all summed up in a paper we published in 1997. I would be happy to provide a PDF copy of that paper to anyone requesting it from me at mjayjock@gmail.com.

I will leave you with some summary statements from this work:


The client or customer-based orientation of the above TQL process was found to provide a vital and beneficial mix with the scientific and investigative elements of risk assessment.  Particularly valuable is the elucidation and negotiation of the contract elements between assessor and client early in the process.  This explicit discussion allows the assessor and the client to set appropriate and explicit expectations for the partnership.  Also, the iterative checking with clients relative to the match of the work product with their needs lowers  the incidence of surprises and the occurrence of relatively long and costly periods of unproductive work.   The authors believe that this scheme for RA management could act as a general guide for many, if not most, consultant/client interactions in this discipline whether they are external or internal to a corporation.