
Sunday, September 28, 2014

Why Model Exposures when you can Measure Them?


The blog this week is a reworded excerpt from what I believe may be the best book an IH can own; that is, Mathematical Models for Estimating Occupational Exposure to Chemicals, 2nd Ed, AIHA Press.

Many Industrial Hygienists are faced with at least scores of exposure scenarios in which workers are exposed to many different compounds under many different circumstances.   It is probably safe to say that, under the current operating system, the exposures associated with a majority of tasks are never monitored because the industrial hygienist judges them to be safe.   Indeed, John Mulhausen has taken the lead in discussing, disclosing and popularizing the fact that the most common number of samples collected to make this determination is zero.   Most hygienists are not required to present a formalized or systematic analysis to support these decisions.  That is, he or she has observed the universe of scenarios and has, for the most part and probably correctly, concluded that the exposure limit is not exceeded.  When asked how that determination was arrived at, the typical answer is that he or she applied expert judgment; that is, the occupational hygienist used his or her combined experience to make the decision. When pressed further, the hygienist might say that it is because the system or scenario under consideration is relatively closed, the vapor pressure is low, the exposure limit is relatively high, and so on.  This combination of factors tells an experienced occupational hygienist that overexposure will not occur.  When professional judgment leads the hygienist to believe exposures may approach or exceed the exposure limit, a monitoring plan is put into action, and the results of that monitoring determine whether controls are implemented.
I believe that much of exposure assessment in general, and industrial hygiene in particular, has been practiced using the reactive, reflective, qualitative, and relatively undefined expert judgment outlined above.  Within the IH world, this way of working has generally protected many workers from overexposure and subsequent adverse health effects.  However, it has a number of serious flaws, including:
(1)  It is difficult or impossible to explain objectively.
(2)  It is typically not supported by explicit quantified facts relating specific cause and effect.
(3)  It is not amenable to technology transfer (i.e., those new to the field find it hard to learn).
(4)  It is often insufficient to provide convincing evidence to affected workers or to defend against litigation or other legal challenges.
Thus, I believe that the standard method of relying only on direct measurement is clearly not the best we can do as industrial hygienists.  In fact, sometimes measurements cannot be taken.  Consider the following situations:
  •          You want to monitor exposures, but NO method is available
  •          You cannot measure exposures “right now” when they are occurring
  •          You cannot measure exposures because you cannot be present, such as when they happen at another location, they happened previously (retrospective), or they have not happened yet (prospective)
  •          A small sample size of exposure monitoring events leads to a heavy bias toward concluding unacceptable exposures are acceptable
  •          The financial burden associated with a technician’s time to collect samples and with analytical fees is a real-world constraint that restricts monitoring efforts

As mentioned above, the typical IH applies expert judgment in deciding whether or not to monitor.  I submit that they are actually using a subliminal model to make the call that the scenario requires monitoring or that it does not.   The entire point of this discussion within the book is that, rather than simply invoking a claim of poorly substantiated (or unsubstantiated) professional judgment, the use of explicit and transparent models enables the industrial hygienist to understand, display and defend the scientific rationale behind those decisions.
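To make the idea of an explicit, transparent model concrete, here is a minimal sketch of one of the simplest screening calculations of this general kind, a steady-state well-mixed room model.  This is only an illustration, not a recipe from the book; the emission rate, ventilation rate and molecular weight used below are hypothetical placeholders.

# Minimal sketch (illustration only) of a steady-state, well-mixed room model.
# All input values below are hypothetical placeholders, not data from the book.

def steady_state_concentration_mg_m3(emission_rate_mg_min, ventilation_rate_m3_min):
    """Steady-state concentration C = G / Q for a well-mixed room."""
    return emission_rate_mg_min / ventilation_rate_m3_min

def mg_m3_to_ppm(c_mg_m3, molecular_weight_g_mol):
    """Convert mg/m3 to ppm at 25 C and 1 atm (molar volume 24.45 L/mol)."""
    return c_mg_m3 * 24.45 / molecular_weight_g_mol

if __name__ == "__main__":
    G = 100.0   # hypothetical emission rate, mg/min
    Q = 20.0    # hypothetical general ventilation rate, m3/min
    MW = 92.1   # hypothetical molecular weight, g/mol

    c = steady_state_concentration_mg_m3(G, Q)   # 5.0 mg/m3
    print(f"Estimated concentration: {c:.1f} mg/m3 ({mg_m3_to_ppm(c, MW):.2f} ppm)")

Even a calculation this simple is explicit: anyone reviewing the decision can see which inputs drove it and challenge them, which is exactly what an unrecorded judgment call does not allow.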

The book goes into a lot more detail relative to this argument but more important, it provides a wealth of technical information about specific models and modeling techniques.   I and my fellow authors receive no money from the sale of this book but I tell you honestly that I believe that no IH should be without it.

Questions for Discussion:
  1. For those of you who have it, what do you think of this book and how could we make it better?
  2. What modeling problems have you tried to solve using the material in the book, and how did that work for you?





Saturday, September 20, 2014

Dermal Bioavailability Driving Exposure


I am reproducing the cause-to-effect continuum graphic from a few weeks ago because it will help us understand a point in this week’s blog; namely, after contact, a material has to be absorbed into the systemic circulation of the body in order to exert a toxic effect.
I have mostly worked in the realm of the bioavailability of chemicals via dermal exposure.   This was a natural result of my employment by a company that made a Type IV contact allergen used as an effective biocide; exposure to it needed to be controlled so that the risk of contact allergy to users was likely to be deemed “not unacceptable”.

You may find the double negative term “not unacceptable” to be strange; however, I use it for a reason.   My sense is that acceptability of risk is very much a political determination.    As such, it is a somewhat subjective judgment made by the body politic.   Indeed, I see it as part of the democratic process that is more or less open to those making, and those subject to, the decisions of that process.   I see that piece as more management and less science.   I see our job as risk assessors as doing the best we can to estimate the risk and, when forced to do so, providing levels of exposure that we believe may not be unacceptable to the body politic.   Indeed, I see it as analogous to the null hypothesis in statistics: one does not accept it, one simply fails to reject it at some level of confidence.

OK, so much for philosophy; the purpose of this blog is to discuss bioavailability via dermal exposure.  During my analysis of bioavailability for product safety, I studied two mechanisms that affect or control the amount of a topically applied dose of contact allergen that might make it to the systemic circulation of the body.  Indeed, it is commonly accepted that the contact allergen has to pass through the stratum corneum (SC), or top layer of our skin, in order to elicit a contact dermatitis allergic response.

The first mechanism, which I thought about but never got the opportunity to test in the laboratory, was the limited bioavailability of contact allergens caused by the continuous shedding of our skin.

The SC layer of our skin is composed of about a dozen or so layers of essentially flat, dead cells.   These are shed from the top as they are replaced from the bottom in a process known as desquamation.   That rate of shedding is about one layer of cells per day.   The “ring” around the bathtub at the end of a bath is composed of some of these dead cells.   Indeed, a lot of house dust is also composed of these skin cells constantly being shed from our bodies.

Contact allergens are reactive and theoretically bind with protein to form an immunologically active species that goes on to cause an allergic response.   As it turns out, even the dead cells in the SC layer have protein sites that can theoretically react and bind with the allergen.   It is my opinion and hypothesis that this binding essentially detoxifies the allergen because it is not mobile (being locked into the desiccated dead SC cell) and the cell is making its way to the surface to be shed.   When the rate of allergen application overwhelms this process, the allergen makes it past the SC and into the viable dermis to do its thing; however, if the rate of application of contact allergen is low enough, it cannot “swim” against the up-welling current of SC cells fast enough to reach the viable dermis.   In this hypothesis we have a natural and demonstrable threshold of toxic effect for dermally applied contact allergens.
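The hypothesis boils down to a simple rate comparison: if the allergen arrives at the skin surface more slowly than it can be bound by SC protein and carried away by shedding, none of it should reach the viable dermis.  The back-of-the-envelope sketch below only illustrates that threshold logic; the binding capacity, shedding rate and applied doses are entirely hypothetical numbers, not measured values.

# Back-of-the-envelope sketch of the desquamation "threshold" hypothesis.
# Every parameter value here is hypothetical and used only for illustration.

def dermal_threshold_exceeded(application_rate_ug_cm2_day,
                              binding_capacity_ug_cm2_per_layer,
                              layers_shed_per_day=1.0):
    """Return True if the applied allergen is hypothesized to overwhelm
    the binding/shedding removal capacity of the stratum corneum."""
    removal_capacity = binding_capacity_ug_cm2_per_layer * layers_shed_per_day
    return application_rate_ug_cm2_day > removal_capacity

if __name__ == "__main__":
    # Hypothetical binding capacity of one cell layer: 0.5 ug/cm2
    # Shedding rate: ~1 layer/day, as described in the text
    for dose in (0.1, 0.5, 2.0):  # hypothetical applied doses, ug/cm2/day
        flag = dermal_threshold_exceeded(dose, 0.5, 1.0)
        print(f"applied {dose:>4} ug/cm2/day -> exceeds removal capacity: {flag}")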

I know that the concept of any threshold for allergic response will spawn some comment from those who do not believe they exist for allergens but I think the above logic is pretty solid and certainly testable as a hypothesis.   It simply has not been tested as far as I can determine.

The other mechanism of limited bioavailability for allergens comes from the partitioning of allergens within products formulated as polymeric aqueous emulsions.    Depending on how lipophilic the allergen is, it will partition more or less into the polymeric emulsion particles, leaving significantly less of the antigen free to go into and through the SC.
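A simple equilibrium partitioning calculation illustrates the idea; the partition coefficient and polymer volume fraction below are hypothetical placeholders, not values from our study.

# Sketch of equilibrium partitioning of an allergen between the aqueous phase
# and polymeric emulsion particles. All numbers are hypothetical placeholders.

def fraction_in_aqueous_phase(partition_coefficient, polymer_volume_fraction):
    """Fraction of allergen remaining in the aqueous phase at equilibrium.

    partition_coefficient: ratio of allergen concentration in the polymer
        particles to that in water (dimensionless, volume basis).
    polymer_volume_fraction: volume fraction of the emulsion that is polymer.
    """
    water_fraction = 1.0 - polymer_volume_fraction
    water_amount = water_fraction                           # C_w * V_w with C_w = 1
    polymer_amount = partition_coefficient * polymer_volume_fraction
    return water_amount / (water_amount + polymer_amount)

if __name__ == "__main__":
    K = 100.0    # hypothetical polymer/water partition coefficient
    phi = 0.30   # hypothetical polymer volume fraction of the emulsion
    f_aq = fraction_in_aqueous_phase(K, phi)
    # With K = 100 and 30% polymer, only about 2% of the allergen remains
    # "free" in the water to contact and penetrate the SC.
    print(f"Fraction of allergen left in the aqueous phase: {f_aq:.3f}")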


We did get to study this second mechanism in the laboratory and wrote a paper about it.   I would be happy to send a copy of this paper to anyone who requests it from me at mjayjock@gmail.com.

Monday, September 15, 2014

Acute Effects of Carcinogens

We tend to think of chemicals that cause cancer as doing so over a relatively extended period of time.

Indeed, the exposure metric used by the EPA for evaluating this risk is often the Lifetime Average Daily Dose (LADD).   Specifically, we are told in an EPA document on the subject of pesticide risk assessment (http://www.epa.gov/scipoly/sap/meetings/1998/march/chapd-2.pdf) that:

“…If the endpoint is cancer, the [Average Daily Dose] ADD must be amortized over the duration of a lifetime using the calculation for a [Lifetime Average Daily Dose] (LADD). An equation that can be used to calculate LADD values is presented below:

LADD = ADD * (F/365) * (ED/LT)
Where:
  • LADD = ADD amortized over an individual's lifetime (e.g., mg/kg/day);
  • F = frequency of exposure events or the number of days exposed to the pesticide of concern per annum (days/year);
  • ED = exposure duration throughout a lifetime or the number of years exposed to a specific chemical throughout an individual's lifetime (years); and
  • LT = anticipated lifetime of an individual in the exposed population of interest (years)…”
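As a worked illustration of the equation quoted above, the short sketch below computes an LADD from an ADD; the ADD, exposure frequency, exposure duration and lifetime values are hypothetical placeholders.

# Worked illustration of the EPA LADD equation quoted above.
# All input values are hypothetical placeholders.

def lifetime_average_daily_dose(add_mg_kg_day, exposure_days_per_year,
                                exposure_duration_years, lifetime_years=70.0):
    """LADD = ADD * (F / 365) * (ED / LT)."""
    return (add_mg_kg_day
            * (exposure_days_per_year / 365.0)
            * (exposure_duration_years / lifetime_years))

if __name__ == "__main__":
    ladd = lifetime_average_daily_dose(add_mg_kg_day=0.01,       # hypothetical ADD
                                       exposure_days_per_year=250,
                                       exposure_duration_years=30,
                                       lifetime_years=70)
    print(f"LADD = {ladd:.5f} mg/kg/day")   # about 0.00294 mg/kg/day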

Thought of this way, the risk from any particular chemical can be considered as a virtual “cup” that is filled to various levels with the carcinogen of interest as one goes through life.  The more the cup is filled, the greater the chance of getting cancer from the exposure.

This approach essentially ignores any dose-rate effect for carcinogenicity, which may or may not be correct; however, the point of this blog is to highlight the fact that at least some carcinogens may exhibit adverse health effects from acute or even bolus exposure that may or may not relate to their ability to cause cancer in animals or humans as part of an LADD.

A case in point that recently came to my attention is that of dimethyl and diethyl nitrosamines.

Everyone knows and readily accepts that these compounds are powerful liver carcinogens in laboratory animals.  Indeed, many or most consider them to be putative human carcinogens and their risk is managed for the most part on this basis; however, little has been known or studied relative to their ability to cause acute injury.   

In November of 1976 the EPA published a comprehensive (228-page) report entitled Scientific and Technical Assessment Report on Nitrosamines, EPA-600/6-77-001.  The entire report is available online as a PDF image; just put the EPA report number into Google.  Among many other things, the report addresses the issue of acute toxicity of nitrosamines. To quote this report:  “The potency of N-nitroso compounds in causing acute tissue injury and death varies considerably (Table 3-1).”    Table 3-1 clearly shows that dimethyl and diethyl nitrosamine are considered the most reactive compounds in the nitrosamine series and, to quote the report, these most “reactive compounds produce hemorrhagic destructive lesions at the site of contact…” [emphasis added]   The report goes on further in the same paragraph: “Spills have led to irritation of the eyes, lungs and skin.”

The report also provides some insight as to why acute toxicity has not been studied much, stating that the hepato-carcinogenicity of these compounds is so striking that the study of their acute effect(s) has been essentially ignored.  From the above, however, it appears that acute contact-site toxicity, of the respiratory tract for inhalation and of the skin for dermal exposure, is likely.


This blog has covered various aspects of bolus exposures in the recent past.   The lesson from the above information on nitrosamines is clear: if you are dealing with bolus exposure to putative carcinogens, you need to address their acute as well as their chronic toxicity potential.

Monday, September 8, 2014

The Hill Criteria for Establishing Causation


Last week’s blog discussed the continuum of cause and effect from sources to adverse health effect from human exposure to chemicals.  The focus of that piece was to explore the situation where you have an adverse health effect in the work place but no understanding or even a signal of any of the elements that preceded it in the above line of cause and effect.   Within that situation, we are first charged with confirming that the adverse reaction(s) are real and caused by a workplace exposure.   Once we do that we are absolutely on the hook to figure out what is happening in order to determine a fix. 

This first step, that is determining that the effect is real and work-related, is critical and very often not obvious.   This week’s blog talks about making that connection.

My friend and colleague, Dr. Andy Maier, recently taught me about what are commonly called Hill’s Criteria of Causation [Hill 1965].  These are the minimal conditions needed to establish a causal relationship between potential disease agents and human diseases. They were originally presented by Sir Austin Bradford Hill (1897-1991), British Professor Emeritus of Medical Statistics at the University of London, as a way to determine the causal link between a specific factor and a disease.  I am not an epidemiologist, but I now understand that Hill’s Criteria form the basis of modern epidemiological research and have been used in epidemiological science for nearly fifty years.

Clearly, the Hill criteria can also be used to establish causation in the above situation; that is, we have an adverse health effect in a worker or workers that we believe is linked to their work place exposure but need to establish (or deny) that link.   The individual Hill Criteria are listed below with a brief explanation for each:


TEMPORAL RELATIONSHIP

Exposure always precedes the outcome.  Note:  This is perhaps the most important of the criteria.


STRENGTH

This is defined by the size of the association as measured by appropriate statistical tests.  I think we all know that, strictly speaking, correlation is not causation; however, the stronger the association, the more likely it is that the relation of "A" to "B" is causal.

DOSE-RESPONSE RELATIONSHIP

An increasing amount of exposure to the putative cause increases the risk. If a dose-response relationship is present, it is strong evidence for a causal relationship.   


CONSISTENCY

The association is consistent when results are replicated in studies in different settings using different methods.  That is, if a relationship is causal, we would expect to find it consistently in different studies and among different populations.  In the workplace, consistency is suggested when more than one worker experiences the same effect in a similar manner under similar circumstances.

PLAUSIBILITY

The association agrees with currently accepted scientific understanding of pathological processes. In other words, there needs to be some rational and theoretical basis for positing an association between a vector and disease.


CONSIDERATION OF ALTERNATE EXPLANATIONS:

In judging whether a reported association is causal, it is necessary to determine the extent to which you have taken other possible explanations into account and have effectively ruled out such alternate explanations. In other words, it is always necessary to consider multiple hypotheses before making conclusions about the causal relationship between any two items under investigation. 

EXPERIMENT

The condition can be altered or fixed (i.e., prevented or ameliorated) by an appropriate experimental regimen. 

SPECIFICITY

This is established when a single putative cause produces a specific effect.

COHERENCE

The association should be compatible with existing theory and knowledge. In other words, it is necessary to evaluate claims of causality within the context of the current state of scientific and technical knowledge.

Ref: Hill, A. B. 1965. “The Environment and Disease: Association or Causation?” Proceedings of the Royal Society of Medicine, Section of Occupational Medicine 58, 295 – 300.
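For those who like to document such an evaluation systematically, below is a minimal sketch of a checklist structure for recording how each criterion was judged for a suspected workplace exposure-effect link.  The criteria names come from the list above; the structure and the example ratings are hypothetical, not part of Hill's paper.

# Minimal sketch of a checklist for documenting a Hill-criteria evaluation.
# The criteria names follow the list above; the example ratings are hypothetical.

HILL_CRITERIA = [
    "Temporal relationship",
    "Strength",
    "Dose-response relationship",
    "Consistency",
    "Plausibility",
    "Consideration of alternate explanations",
    "Experiment",
    "Specificity",
    "Coherence",
]

def summarize(ratings):
    """Print each criterion's rating and a simple tally of supported criteria."""
    for criterion in HILL_CRITERIA:
        print(f"{criterion:<42} {ratings.get(criterion, 'not evaluated')}")
    supported = sum(1 for c in HILL_CRITERIA if ratings.get(c) == "supported")
    print(f"\nCriteria supported: {supported} of {len(HILL_CRITERIA)}")

if __name__ == "__main__":
    # Hypothetical evaluation of a suspected workplace exposure-effect link
    example = {
        "Temporal relationship": "supported",
        "Strength": "supported",
        "Consistency": "supported",
        "Plausibility": "supported",
        "Consideration of alternate explanations": "supported",
    }
    summarize(example)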


As mentioned above, if you use these criteria and determine that the untoward worker health effect is indeed related to the exposure, you are then committed to hunt it down with an eye toward control.  Use the judgmental discussion of causality to form hypotheses about the driving determinants of this exposure and then test them.

Monday, September 1, 2014

Risk Assessment as a Cause and Effect Continuum


When you think about it, the risk to human health from chemical exposure can be described by a source-to-dose line of successive cause-and-effect events.   I do not remember all who were involved in the construction of the graphic below, or even what the context might have been, but I remember that I had a part in it.

I also know that I have found the concepts in this particular graphic helpful over the years to gain a big picture perspective of what is happening within the risk assessment process. 

Clearly, Exposure Assessment sits to the left of the vertical human-interface line.   Indeed, the elements of source, transport and contact are essentially pure exposure assessment.  It is the “power alley” of the exposure assessor and the Industrial Hygienist.   In last week’s blog I wrote about how I have been begging for information on these predecessors of exposure in order to do a better job of modeling exposure, especially inhalation exposure to workers.    When we monitor, we go right to the end of this exposure assessment line and roll all of those causes of exposure into one ball as we gauge the actual level of exposure at the human interface.   It is a system that has worked for a long time, but the purpose of last week’s blog was to assert that we could do better and develop the science with more data.

The line of demarcation for exposure/toxicology for dermal exposure assessment seems less clear to me than that for inhalation.   Even after “contact” we have issues of retention at the human interface (skin) and rate and bioavailability of the agent of interest from a matrix and through the interface.   There will be a blog here about dermal bioavailability in the next few weeks.   I just wanted to make some of you aware of what I believe is a difference between inhalation and dermal exposure.

The realm to the right of the human interface line belongs to the toxicologists.    Sometimes when they test animals they tend to roll up most of the elements into determining the doses that cause effects (i.e., dose-response).   

This combined approach has been called “dose ‘em, kill ‘em and count ‘em.” In such instances our toxicological colleagues typically do not look at the details of all the elements in between, which could help inform a more complete picture of the toxicology, especially as it might relate to humans.   There are, of course, toxicological scientists who are looking at the various elements to the right of the line in detail as they relate to the ultimate effect in animals and people.   Some great strides have been made in physiologically based pharmacokinetic (PBPK) modeling, but these studies (like exposure model development) are invariably more expensive than the more summary approaches.   More recently, molecular biologists have been looking at the genetic elements of toxicology, and these efforts provide even more promise for our understanding of a chemical’s toxicology.

However, what I really wanted to talk about in this week’s blog is the situation where you have an adverse health effect in the work place but no understanding or even a signal of any of the elements that preceded it in the above line of cause and effect.    This, of course, is the subject of some dose-reconstruction efforts in the service of epidemiological studies in which an apparent adverse effect appears to have happened and the source is sought. 

Beyond the realm of retrospective epidemiology, this situation can sometimes happen in the contemporary workplace.   That is, workers can be showing acute effects of a chemical exposure whose cause has been heretofore unknown or not even anticipated.    As Industrial Hygienists, we owe it to ourselves to do a number of tasks:

  1.      Confirm that the effects are real
  2.      If real, form hypotheses to help you hunt down the possible causes within the workplace
  3.     Test the  suspected causes first with modeling and then with monitoring
  4.     Depending on your degree of ownership of Risk Management resources, recommend and invoke controls to eliminate the exposure and risk.


If you have not seen or even anticipated an exposure, doing any or all of these steps could be quite challenging because it means the problem is furtive at best, or else you would have caught it before now. My sense is that step 1 is critical. Indeed, working with affected workers and medical staff is essential in determining whether the effects are real and a result of the workers’ employment.  Listen to the workers and listen for patterns in their reports.   Most workers are honest witnesses to events and should be taken seriously.

After establishing that you have a problem, albeit a sneaky one, you should start building a hypothesis or two as to what might be happening using everything you know about their symptoms and the workplace.   

For example, it may be that the exposure is happening as bolus doses of very short duration (see the previous blog on bolus dosing) and that time-integrated sampling could be missing it.   You may not be monitoring for a potential cause at all because you dismissed it.    It may be occurring via un-monitored and heretofore un-noticed dermal exposure.    The point is that you have a problem that has to be addressed.  Once you determine that it is more likely than not that the workers’ adverse effect happened because of exposure at work, you are on the hook to hunt down the source of that exposure.
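To see how a short bolus can hide inside a full-shift, time-integrated sample, consider the arithmetic sketch below; the peak concentration, peak duration and background level are hypothetical.

# Sketch of how a short, high "bolus" peak is diluted in an 8-hour TWA sample.
# The peak concentration, duration and background are hypothetical placeholders.

def full_shift_twa(peak_ppm, peak_minutes, background_ppm=0.0, shift_minutes=480.0):
    """Time-weighted average over the shift with one short excursion."""
    return ((peak_ppm * peak_minutes
             + background_ppm * (shift_minutes - peak_minutes))
            / shift_minutes)

if __name__ == "__main__":
    twa = full_shift_twa(peak_ppm=500.0, peak_minutes=5.0, background_ppm=1.0)
    # A 5-minute excursion at 500 ppm over an otherwise 1 ppm day averages
    # out to only about 6 ppm on the full-shift sample.
    print(f"8-hour TWA = {twa:.1f} ppm")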

In the above situation, you are starting at the end of the source-to-dose continuum and working backwards.

Discussion Questions for LinkedIn Groups:

Has this ever happened to you?   If so, what turned out to be the cause and the fix?