I am often asked to review data on environmental exposure to
a specific chemical as part of a human health risk assessment of that chemical
for the general population. This
usually means reviewing the available scientific literature for the presence
of the chemical in various media, including ambient and indoor air, water,
environmental surfaces, soil, and indoor dust.
The basic idea is to determine where the chemical might be showing up
and whether the concentrations being measured might present a risk to
people. The premise is fairly
simple. It is assumed that people:
- Breathe air
- Touch surfaces with various body parts
- Have hand-to-mouth activity that results in ingesting dirt, dust, and their contaminants
The literature search includes where and in what products the chemical is used,
which provides some insight into where and how exposures might occur.
The heart of the search is typically monitoring studies done
on the above-mentioned media in homes and other occupied spaces. Since we spend most of our time indoors,
that part of the process makes sense to me. We are then presented with data on the
measured concentrations in these media.
The data are usually highly variable, with a large range of
concentrations, and here is where the difficulty begins for me.
If the media were sampled at random, how do I
determine whether enough samples were taken to present a reasonably complete
picture? Imagine, if you will, a
situation in which only 1% of the US population is being overexposed to
Compound X in their homes. That represents over 3 million people.
The binomial
distribution tells us how big a sample one would need to take to detect a 1% response. If any of my readers would like me to go
through the math, I can do so in a future blog. It is a valuable math tool and I would be
happy to do so. If, however, you are
willing to trust that I did it properly, I can tell you that taking 100 samples
(i.e., sampling 100 homes) at random has a 37% chance (that is, greater than a 1-in-3
probability) of NOT seeing the 1 house in 100 that is being overexposed to
the chemical.
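For readers who want to check that 37% figure now, here is a minimal sketch in Python. The 1% prevalence and the 100-home sample are the hypothetical numbers from the paragraph above; the chance of missing every overexposed home is simply the zero-hit term of the binomial distribution.

```python
# Probability that a random sample of n homes contains ZERO of the
# overexposed homes, when a fraction p of all homes are overexposed.
# This is the k = 0 term of the binomial distribution: (1 - p)**n.

p = 0.01   # hypothetical prevalence: 1% of homes overexposed
n = 100    # number of homes sampled at random

p_miss = (1 - p) ** n
print(f"P(no overexposed home in {n} samples) = {p_miss:.3f}")  # ~0.366, i.e., about 37%
```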
So how many houses have to be sampled at random to find the
1 in 100 with a 95% confidence level
of seeing it, or only a 5% chance of NOT seeing it? The answer is a sample of roughly 300 houses. These are not whimsical predictions; the
binomial distribution is as real as it gets.
It is why insurance companies do not lose money on their policies
and casinos do not lose on their games of chance.
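The roughly-300 figure comes from turning the same formula around and asking how many homes are needed before the miss probability drops below 5%. A small sketch, again using the hypothetical 1% prevalence:

```python
import math

p = 0.01            # hypothetical prevalence of overexposed homes
target_miss = 0.05  # accept at most a 5% chance of seeing none of them

# Solve (1 - p)**n <= target_miss for n.
n_required = math.ceil(math.log(target_miss) / math.log(1 - p))
print(n_required)      # 299 -- roughly the 300 homes quoted above
print((1 - p) ** 300)  # ~0.049: sampling 300 homes leaves about a 5% miss chance
```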
I know of few monitoring studies that have gone into more
than a few score of homes, and none that have sampled 300.
What is the lesson in all this?
There are two big lessons for me. One: never discount a HIGH value unless you
have real evidence that it truly is an artifactual outlier.
The second lesson is that we need to do a better job of
understanding the potential sources of the chemical, and then forgo at least some random sampling
and go directly to, and sample, the houses that contain those sources.
If the concern is acute exposure, then the sampling needs to
happen close in time to the introduction of a fresh source, whose strength may be waning.
If the concern is the migration of a semi-volatile organic
compound (SVOC) out of a plastic or foam matrix and into dust, then be sure to
give the source enough time to reach equilibrium before sampling, perhaps as long as a few
years.
All of the above monitoring could be helped and informed by laboratory experiments and modeling, but you have heard that song from me before.
I would love to hear your comments on these issues and any
experience you might have with them.
Thanks for a great blog. I have never had enough funding to do sampling, so I have had to rely on literature searches and modeling. Experimental data (even for a surrogate chemical) are always useful if you have them.