
Monday, June 30, 2014

Characterizing Airborne Particulate Exposure (particle size)


In the last blog I briefly introduced the general topic of aerosol science as it relates to human health exposure and risk assessment, with a focus on insoluble particulate.   The point was made that aerosol particles, even if they are identical in composition and come from the same source, will typically comprise a range of sizes and sometimes different shapes.   We also saw that the size of any individual particle, expressed as an aerodynamic diameter (AD), determines its probability of being inhaled and where in the respiratory tract it could be deposited.

This week’s blog goes into more detail as to how we characterize the particle sizes within any real aerosol. 

Let us consider a relatively fine powder of unit density in which the vast majority of particles are between 150 and 200 micrometers AD.    You might not expect this powder to be very “dusty” if you were to suspend it in a workroom air volume.    Indeed, particles this big would tend to settle to the floor within a few seconds.   However, what if the material was being conveyed by pneumatic transport from a large bin to bags?   If the rate of transfer was high enough, even a relatively small percentage of “fines” (particles within the powder much smaller than 100 microns AD) could contribute strongly to a persistent airborne aerosol in the workroom.   Add to this the critical fact that the particles tend to break apart (they are friable) on impact with the conveyor walls and with each other, and the aerosol exposure potential increases dramatically.

In this case, we need to sample the air to see exactly what is going on with the airborne particulate concentration and the range of sizes. Today we have some remarkable optical devices to assist us; however, in the old days we did this with an Andersen impactor, which involved putting papers or other impacting surfaces on a series of impaction plates in a device that looks like a cylindrical coffee maker.   We would draw enough air through the unit and the particles would be deposited on the various stages of the impactor depending on their specific range of AD.    One could then weigh the deposits on all the stages and calculate/model an actual distribution of the ADs of all the particles.   If I get more than a few folks asking me to go into the details of this process I will happily present it in a future blog.   For now we will just use the results.

The end result is that the distribution of ADs is typically well described by a log-normal distribution of the total airborne mass. The parameters of a lognormal aerosol mass distribution are:

1. The mass median aerodynamic diameter (MMAD)
2. The geometric standard deviation (GSD)

The MMAD is the median particle size (half the mass is in particles smaller than this size and half in larger), and the GSD is the ratio of the 84th percentile size to the 50th percentile (median) size.    Graphs of a real-world aerosol distribution from some of my previous in-plant research are presented below:

[Graph: a lognormal distribution generated using MMAD = 10 microns and GSD = 2]

[Graph: the log of the AD size versus percentage on a probit scale]


You can see from the top graph that the distribution (like all log-normal distributions) is bounded below by zero and skewed toward the lower values.   This is because gravity favors keeping the smaller particles in the air longer than the large particles.

As you travel along this distribution you can see the relative contribution of the various ADs, and therein lies the real value of this analysis.  For example, particles with AD = 2 microns and smaller represent about 1% of the total aerosol mass in this example. You can also inspect this visually to get some idea as to how much of the total mass of this distribution is in particulate 10 microns AD and lower.  That is, the portion of the total mass of the aerosol that has some reasonable chance of making it to the deep or pulmonary regions of the lungs.
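For readers who like to check such numbers, here is a little sketch of mine (not part of the original analysis) of the cumulative lognormal calculation, using the illustrative MMAD and GSD from the graphs above:

import math

def mass_fraction_below(d, mmad, gsd):
    """Cumulative fraction of the airborne mass in particles smaller than d microns AD,
    assuming the mass distribution is lognormal with the given MMAD and GSD."""
    z = math.log(d / mmad) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Illustrative distribution from the graphs above: MMAD = 10 microns, GSD = 2
for d in (2.0, 10.0):
    print(f"mass below {d} microns AD: {mass_fraction_below(d, 10.0, 2.0):.1%}")
# Expect roughly 1% of the mass below 2 microns and 50% below 10 microns (the MMAD).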

Using the parameters of this distribution and the ACGIH mathematical relationship listed for respirable mass, one can estimate the percentage of the total airborne particulate that is indeed respirable.   In this example, it was about 12%.  This is exactly what respirable mass cyclone personal samplers are supposed to do.    That is, if you sampled the example aerosol for total airborne mass concentration and got 10 mg/m3, a concurrent respirable mass sample should come in around 1.2 mg/m3.
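The ACGIH relationship itself is not reproduced in this post.  As a rough illustration only, the sketch below assumes the commonly published ACGIH/ISO respirable sampling convention (the inhalable fraction 0.5*(1 + exp(-0.06*d)) multiplied by a lognormal penetration curve with a median of 4.25 microns and a GSD of 1.5); those constants are my recollection and should be checked against the ACGIH documentation.  Averaging that efficiency over the example mass distribution (MMAD = 10 microns, GSD = 2) gives roughly 12 to 13%, consistent with the figure quoted above.

import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def respirable_efficiency(d):
    """Assumed ACGIH/ISO respirable sampling convention for a particle of d microns AD:
    the inhalable fraction multiplied by a lognormal penetration curve
    (median 4.25 microns, GSD 1.5)."""
    inhalable = 0.5 * (1.0 + math.exp(-0.06 * d))
    penetration = 1.0 - N.cdf(math.log(d / 4.25) / math.log(1.5))
    return inhalable * penetration

def respirable_mass_fraction(mmad, gsd, n=5000):
    """Fraction of the total airborne mass that is respirable, found by averaging the
    sampling efficiency over equal-probability slices of the lognormal mass distribution."""
    mu, sigma = math.log(mmad), math.log(gsd)
    total = 0.0
    for i in range(n):
        p = (i + 0.5) / n                        # midpoint of each mass-probability slice
        d = math.exp(mu + sigma * N.inv_cdf(p))  # AD at that mass quantile
        total += respirable_efficiency(d)
    return total / n

print(f"respirable fraction: {respirable_mass_fraction(10.0, 2.0):.1%}")
# With MMAD = 10 microns and GSD = 2 this comes out near the ~12% noted above.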


This blog was designed simply to provide you with some insight into the characterization of airborne particulate mass as a distribution.    If more than a few of you would like me to go into some of the computational details and spreadsheet software, I would be happy to do so in a future blog.

Sunday, June 22, 2014

Inhaled Insoluble Particles are more than a Nuisance


Many years ago I did my graduate research on the measurement of airborne insoluble particles.  These often arise from the generation (e.g., spraying or cutting), transport, or disturbance of liquids, powders or dusts.  Because of all the work and study I did with these aerosols, I have maintained an interest in this subject.

Of course, airborne particles are a lot different than vapors.  Compared to vapor molecules they are much bigger and have much more mass, and instead of being mixed in with the other gaseous molecules in air they are more or less suspended in it.  Depending on their size they tend to settle downward either quickly or slowly.   Knowing the size of the particles is also important because size determines whether a particle is inhaled and, once inhaled, its chance of making it into the deep lung where gas exchange takes place.

The population of insoluble airborne particles from any source is almost always of variable size, which makes the characterization of that size somewhat challenging.   The sizing of a population of aerosol particles will be the subject of next week's blog.

A logical and somewhat convenient way of “sizing” any individual airborne particle is by its aerodynamic diameter (AD).   This convention compares the settling velocity of the particle of interest to that of a unit density sphere.   Thus particles of any density or shape can be characterized for how they will behave in the air using this method.   It is interesting to note that I once measured the aerodynamic diameter of one of my kid’s balloons at about 150 microns AD.   That is, a foot-long balloon fell to the ground at about the same rate as a unit density sphere with a diameter of 150 microns (0.15 mm).    On the other hand, a 1 micron sphere of mercury will have an AD much larger than 1 micron, but you get the idea.
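To make the mercury example concrete, here is a minimal sketch of the simplified relationship for smooth spheres settling in the Stokes regime (slip correction and dynamic shape factors are ignored); the numbers are illustrative only:

import math

UNIT_DENSITY = 1.0  # g/cm^3, the reference density for aerodynamic diameter

def aerodynamic_diameter(physical_diameter_um, particle_density_g_cm3):
    """Approximate AD of a smooth sphere settling in the Stokes regime,
    ignoring slip correction and dynamic shape factors."""
    return physical_diameter_um * math.sqrt(particle_density_g_cm3 / UNIT_DENSITY)

# A 1 micron mercury droplet (density ~13.5 g/cm^3) settles like a unit density
# sphere of about 3.7 microns, which is why its AD is much larger than 1 micron.
print(aerodynamic_diameter(1.0, 13.5))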

Indeed, the science of aerosols is quite interesting.   Very small particles “see” or experience the air as a very viscous medium compared to their larger brothers.   The more viscous the air appears, the more the particles tend to move with the airstream and not charge through it.   An analogy for us would be that we are very free to move (at least horizontally) through air, but we would be much more “stuck” and restricted if we were immersed up to our necks in molasses.  In that case, if the molasses moved we would move with it.

As mentioned above, the AD size of any particle determines where in the body it might deposit.   Three size classifications or “bins” of AD have been defined by the ACGIH, going from large to small sizes:


  •  INHALABLE: any particle that penetrates/deposits past the nose and mouth.
  •  THORACIC: particles that penetrate/deposit anywhere within the lung airways and the gas-exchange region.
  •  RESPIRABLE: particles that penetrate/deposit exclusively into the gas-exchange (pulmonary) region of the deep lung.

The above classifications describe the proportion of any particular AD size that will fit into that classification or bin.    For example, a particle that is 10 microns AD will be considered 77% inhalable, 73% thoracic and 1% respirable, if I read the ACGIH criteria algorithms correctly.   That is, this particular particle will have a 77% probability of being inhaled, a 73% probability of making it to the thoracic region of the respiratory tract and a 1% chance of getting to the alveolar or pulmonary region of the lung.   A sketch of the inhalable and respirable calculations appears below.
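For those who want to see how such numbers fall out, this short sketch of mine evaluates assumed forms of the ACGIH/ISO inhalable and respirable conventions at 10 microns AD (the thoracic convention has a similar form and is omitted here); the specific constants are my assumption and should be checked against the ACGIH documentation.

import math
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def inhalable_fraction(d):
    """Assumed ACGIH/ISO inhalable convention for a particle of d microns AD."""
    return 0.5 * (1.0 + math.exp(-0.06 * d))

def respirable_fraction(d):
    """Assumed respirable convention: the inhalable fraction multiplied by a
    lognormal penetration curve (median 4.25 microns, GSD 1.5)."""
    return inhalable_fraction(d) * (1.0 - N.cdf(math.log(d / 4.25) / math.log(1.5)))

d = 10.0
print(f"inhalable fraction at {d} microns AD:  {inhalable_fraction(d):.0%}")   # about 77%
print(f"respirable fraction at {d} microns AD: {respirable_fraction(d):.1%}")  # about 1%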

If you are interested in studying this further, I have a set of PowerPoint slides that goes into a lot more detail, including the algorithms, which I will be happy to send to whoever asks me at mjayjock@gmail.com

So what does this mean for the inhalation of insoluble particulate?   When the airborne particle size is respirable (less than 10 microns AD), a good portion of it makes it to the alveolar region of the lung.   The clearance mechanism for insoluble particles in this region of the lung is quite slow.   As such, daily exposures to respirable particles tend to accumulate and can potentially overwhelm the clearance mechanism.   This has been termed particulate overload and can lead to irritation and irreversible toxic effects such as Chronic Obstructive Pulmonary Disease (COPD).

I am not a toxicologist, but for many years I have been of the opinion that the OEL for insoluble “Nuisance Particulate” or the more modern term, Particulates Not Otherwise Classified (PNOC), should be around 1 mg/m3 (respirable), which is considerably lower than it has been.   I base this view on my review of some of the earlier toxicological studies and conclusions.   Some of these references, data and conclusions are included in the above mentioned PowerPoint slide deck.


Monday, June 16, 2014

Junk Science, Daubert and the 2 Zone Model


You may have heard about the Daubert legal ruling.  It is an attempt to keep junk science out of the courtroom.  The Daubert standard governs the admissibility of expert witness testimony during all U.S. federal legal proceedings and over half of state proceedings. It allows for the legal challenge of any expert witness testimony. In essence, the expert scientific witness has the burden of proof relative to the validity and acceptability of his or her scientific conclusions.   They can no longer simply claim that they are an “expert” whose opinions and conclusions should be accepted without proof.

An example of a potential Daubert challenge for exposure assessors or industrial hygienists exists in the use of first-principles physical-chemical models of human exposure.   A few years ago we were subject to a potential Daubert challenge for some modeling work conducted in anticipation of a court case.  In preparation for this challenge we presented the relevant details of the two-zone model relative to Daubert in a draft affidavit for the court.   We specifically elucidated the Daubert criteria to address a potential challenge to the Near Field/Far Field (two-zone) indoor air model of breathing zone concentration.  As often happens, the case was settled before we had an opportunity to present our arguments.   We decided to summarize this work in a paper for the benefit of other exposure assessors and industrial hygienists in future cases.

My friends and colleagues Tom Armstrong (fellow modeler) and Michael Taylor (lawyer specializing in occupational health law) published the paper in the Journal of Occupational and Environmental Hygiene as a commentary.   If you want a PDF of this paper I will be happy to send it to anyone requesting it of me at mjayjock@gmail.com.

The paper goes into considerable detail on the two-zone model and the model from which it was derived, the well-mixed box model.   In the paper we present the model theory and discuss our methods, findings, and conclusions under four key Daubert requirements, listed below:

1. Testing of the model
2. Status of peer review and acceptance
3. The technique’s rate of error and standards for the model’s application
4. Personal/professional experience and use of the model.

With regard to “Testing of the Model” we conducted a literature search from which we located and documented multiple studies where the NF/FF technique has been appropriately used and tested. The testing of the model actually involves several stages:

1. Correctness of the mathematical derivation
2. Implementation of the computations and verification of the calculations
3. Testing of the model against suitable real world air concentration data: verification of model predictions against real world data needs to include situations in which near field concentrations are both predicted and measured.

Regarding the status of peer review and acceptance, we found and documented multiple peer reviewed publications on the model, coverage in two editions of the AIHA book Mathematical Models for Estimating Occupational Exposure to Chemicals, and coverage in chapters of two other texts.  In addition, we provided a table that lists numerous studies from a review of the NF/FF model included in a paper by Jennifer Sahmel et al. in 2009.  We also included at least 10 other published peer reviewed journal articles using the model to successfully predict exposures to various airborne contaminants.

The model’s rate of error is actually bound up in its standards for application.   Since all models are idealized portrayals of reality, they are accurate to the extent that the reality under investigation conforms with their assumptions.   Indeed, the standards for the model’s application have been stated as the principal “bounding conditions” for the model’s use and can be summarized from the various publications and book chapters as follows:

• The contaminant is instantaneously mixed within each of the near-field and far-field zones of the work space.
• There is limited airflow between the two zones.
• The random air velocity between the two zones is uniformly distributed across the NF/FF interface surface.
• There are no significant cross drafts.

Testing the model versus real world concentrations indicates that many scenarios reasonably conform to the above conditions and that model predictions typically come within a factor of 0.5 to 2 of measured values.   This is good performance for any concentration-predicting model.   The paper has all the details and, as mentioned above, is available to all who ask for it.
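The model equations are not reproduced in this post, but for orientation here is a minimal sketch of the steady-state form of the NF/FF model as it is commonly published (far field concentration G/Q, near field concentration G/Q + G/beta); the generation rate, ventilation rate, and interzonal airflow values below are purely illustrative, not from any case.

def nf_ff_steady_state(generation_mg_min, room_airflow_m3_min, interzonal_airflow_m3_min):
    """Steady-state two-zone (NF/FF) concentrations in mg/m^3, using the commonly
    published form: C_FF = G/Q and C_NF = G/Q + G/beta, where beta is the
    airflow exchanged between the near field and the far field."""
    c_ff = generation_mg_min / room_airflow_m3_min
    c_nf = c_ff + generation_mg_min / interzonal_airflow_m3_min
    return c_nf, c_ff

# Purely illustrative numbers: 100 mg/min released in the near field, 30 m^3/min of
# general room ventilation, and 5 m^3/min of air exchanged between the two zones.
c_nf, c_ff = nf_ff_steady_state(100.0, 30.0, 5.0)
print(f"near field: {c_nf:.1f} mg/m3, far field: {c_ff:.1f} mg/m3")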



Sunday, June 8, 2014

Scientific Uncertainty is the Root Cause of the RBOEL Controversy


Uncertainty drives us; indeed, the reality is that we do not understand what changes are or are not occurring at the tissue level in human beings as a result of their exposure at current OELs. 
 
Using the traditional OEL-setting method of a toxicological point of departure (POD) divided by a modifying (safety, uncertainty, adjustment) factor, experts can say that from the available toxicological data their best opinion is that “nearly all” exposed workers are or will be protected at this specific level of exposure (i.e., the OEL).   Those of us who advocate risk based OELs (RBOELs) can say that, by using a model or models with stated assumptions, there is a range of quantitative risk estimated at the OEL.
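To make the arithmetic of the traditional method concrete, here is a minimal sketch; the POD and factor values are hypothetical and not taken from any OEL documentation.

def oel_from_pod(pod_mg_m3, modifying_factors):
    """Traditional OEL derivation: a toxicological point of departure divided by
    the product of modifying (safety, uncertainty, adjustment) factors."""
    total = 1.0
    for factor in modifying_factors:
        total *= factor
    return pod_mg_m3 / total

# Hypothetical numbers only: a 50 mg/m3 POD with factors of 3 (animal-to-human)
# and 10 (human variability) yields an OEL of about 1.7 mg/m3.
print(round(oel_from_pod(50.0, [3, 10]), 2))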

In the current method (POD/SF) the quantitative level of uncertainty is essentially unaddressed; with RBOELs it is out in the open and explicitly indicated.   Indeed, this distinction may lie at the heart of why RBOELs (particularly for non-carcinogens) have not gained any traction within our profession.

It does not take a genius to predict that, using most currently available data, an RBOEL at a target risk of 1 in 1000 will have a large error band; that is, a large range of risk exists around that 1 in 1000 prediction.   Even more potentially troubling, the risk range predicted using this methodology with many, if not most, of the data associated with current OEL values will be quite high when viewing the top of any reasonably constructed error band.

So what might be done to fix this admittedly difficult situation?   I have always thought that the solution to large error bands was confident scientific knowledge.   In this case it would be knowledge of what is actually happening at the human tissue level in target organs during exposure at the OEL.   My sense is that this will only happen when we apply the science of molecular/genetic biology to the questions at hand.  In the past 10 years there has been an explosion of this type of research; unfortunately, I do not know of any specific application of it to our questions today regarding OELs.    Indeed, these scientists generally do not come to our conferences on exposure assessment or even toxicology and, as far as I can tell, there is little relevant interdisciplinary activity in this area.

The bottom line today is that we are all people of good faith working in the trenches to do the best we can with the data we have.  In general, we are forced to deal with this situation without the power to feed the process what it needs most; namely, research resources (spelled M-O-N-E-Y) and data.  The individuals with the real power to allocate or otherwise control the resources to be applied to the entire problem do not have the specific task of setting OELs.   They appear to have a lot on their plate, and one of their primary motivations seems to be cost control.  Perhaps we have simply failed to sell the problem, or indeed we ourselves may not realize there is one.

For the most part these decision makers are probably satisfied with the current system; however, I personally am not.  In my opinion, using the current system we are most likely adequately controlling most of the risk to worker health from exposure.   The fact remains, however, that we are being surprised at a fairly predictable rate by dangers we failed to address, as we were with popcorn fragrance.   Over the years we have been repeatedly “surprised” as the vast majority of OEL changes have been in the direction of significant reduction.
 
When problems relative to any OEL being too high find us, it is too late for those adversely affected, and we as risk assessors or risk managers have failed.    Given this history, I am convinced that there are exposures around current OELs that are ultimately causing adverse health effects in workers.   How much danger is out there and how important it might be to proactively address it is a matter of personal perspective.    I believe that there is currently enough danger and opportunity to avoid adverse health effects to spur us to do better.


The use of RBOELs could be seen as the first step in improving the system.   My sense is that exposing the uncertainty around the quantitative risk estimated at the OEL (i.e., RBOELs) will drive the collection of more data, as the stakeholders in the process first see these numbers and then confront this scientific and political reality by addressing the uncertainty with more resources and data.

Sunday, June 1, 2014

Risk Based OELs


This blog is being published during the week of the AIHA Annual Conference in San Antonio, TX.   I do not know if that will be a good week for readers of this blog because of their potentially heightened professional involvement or a bad week because they will be tied up at the conference.   In any event I have chosen this topic for this week because of the importance of this subject matter.

This week’s blog is really the work of my friend and colleague Adam Finkel.   Adam worked at OSHA in the 1990s and was perhaps the earliest advocate for what has come to be known as Risk Based Occupational Exposure Limits (RBOELs).   I have discussed RBOELs in this blog before; to recap, they are OELs that are set at some quantitative level of putative risk (1 in 1000 has been suggested) determined by the modeling analysis of toxicological data.   RBOELs have been used for carcinogens by the EPA and others for some time, but Adam (and I and others) are proposing that they be used more broadly for all substances with enough documentation to set an OEL.

Dr. Jimmy Perkins and I have been conducting a series of teleconferences with members of the professional community with a stake and standing in this process.  A few weeks ago Adam presented to this group.   I would be happy to send his full presentation (PowerPoint slide deck) to anyone who requests it at mjayjock@gmail.com.   In the meantime I am going to present some bullet points from his talk that I found particularly frank, to-the-point and compelling.  No one could ever accuse Adam of beating around the bush!

What do the various kinds of limits ACTUALLY tell the worker who knows (roughly) what concentration s/he is being exposed to, but wants to know how dangerous it is?

   The OSHA PELs actually indicate levels that lawyers and economists decided were economically feasible for most or all employers to meet!  There is lots of cutting-edge risk science in the Preambles to the PELs, but the numerical limits themselves reflect (anemic) determinations about feasibility. (The word “anemic” in this paragraph is a personal judgment based on my (Adam’s) 12 years at OSHA; every other word is, I assert, unimpeachable.)

   The ACGIH TLVs indicate levels that very smart, energetic, and creative volunteers together decided met some unknown balance of “reasonable assurance of safety” and reasonable achievability in the workplace.  Every such judgment is chemical-specific, not generic.

   At concentrations above or below the PEL or TLV, no knowledge about how safe or how dangerous the exposure is can be transmitted.

The leaders and rank-and-file of the occupational health world are estranged from risk assessment, and the rift is widening:

  •  long-standing moral distaste for risk assessment among labor unions, OSHA, NIOSH, etc.;
  •  tendency to blame risk assessment for delays and failures in the regulatory process;
  •  belief among many in corporate OHS that risk assessment is “voodoo” (see next slide, presented in the full slide deck but not herein);
  •  (mistaken) belief that risk assessment is overly “conservative” (see any of 8-10 articles by Adam Finkel on this issue);
  •  unflatteringly defensive posture (“with us or against us”) from the TLV Committee and AIHA;
  •  rise (esp. internationally) of “control banding” and other qualitative “alternatives” to risk assessment.


An Action Plan and Cost Estimate for Developing 200 RB-OELs:

Adam envisions a two-step process:

1. Develop (via a consensus process or an “agree to disagree” process) science-policy ground rules for proceeding to assess risks:

       which study(ies) to select;
       how to convert exposures across species;
       “reference person” parameters;
       dose-response defaults and process for supplanting them.

2. “Turn the crank” for approximately 200 common substances with serious and irreversible health endpoints.

Adam thinks this could be done with 10 people, one year, and $750,000.

Of course, there is considerably more technical detail available in the slides which, again, I will be happy to send to anyone who asks for them.