We all probably know by now that Human Health Risk Assessment is the integration of a chemical's ability to cause harm to human health with the actual exposure to that chemical that might occur in the real world. In terms of a simple conceptual model: Risk = (Toxicological Harm per unit of Exposure) × (Exposure).
EVERY chemical will cause toxicological harm at some dose. Pure oxygen breathed for an extended period is toxic. Indeed, Wikipedia reports that pulmonary and ocular toxicity result from extended exposure to elevated oxygen levels at normal pressure. Given a high enough dose, toxicological harm comes from each and EVERY chemical we can think of. The “what” of that untoward health outcome is called the HAZARD IDENTIFICATION for that substance; in the oxygen example above it is pulmonary and ocular toxicity. It is usually the first adverse health effect that appears as you ramp up the exposure level, and it can range from irritation to more serious outcomes such as cancer or death. The exposure level above which that adverse effect occurs defines the POTENCY of the chemical’s toxicological effect, with highly potent materials causing effects at very low exposures. Oxygen is not very potent. By comparison, benzene is very potent in its ability to cause toxicological harm.
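To make that conceptual model concrete, here is a minimal sketch in Python of Risk = (Toxicological Harm per unit of Exposure) × (Exposure). The chemicals, potency values and exposure numbers are purely hypothetical illustrations, not real toxicological data:

# Minimal sketch of the conceptual model: Risk = (harm per unit of exposure) x (exposure).
# All names and numbers below are hypothetical illustrations, not real toxicology data.

def risk(potency, exposure):
    """Relative risk: toxicological harm per unit of exposure times the exposure received."""
    return potency * exposure

exposure = 2.0                 # arbitrary exposure units (e.g., mg/kg-day)
low_potency_chemical = 0.001   # little harm per unit of exposure (an oxygen-like case)
high_potency_chemical = 10.0   # great harm per unit of exposure (a benzene-like case)

print(risk(low_potency_chemical, exposure))    # 0.002 -> low concern
print(risk(high_potency_chemical, exposure))   # 20.0  -> high concern

The same exposure produces wildly different risk scores, which is exactly why knowing a chemical's potency matters.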
The point of all this is that you cannot begin to do a risk assessment without data on the toxicological properties (HAZARD IDENTIFICATION and POTENCY) of the chemical of interest. If you have No data you have No risk assessment, unless you really force the issue, which will be discussed below.
Unfortunately, much of the toxicological data that we have was sparked by first seeing the adverse health effects of chemicals on exposed humans. Benzene falls into this category, as do asbestos, vinyl chloride and all the other known human carcinogens. Seeing people stricken by these substances caused them to be studied. The other category of well-studied chemicals is pesticides. This is primarily because they are designed to be commercial poisons, so they are expected to be relatively potent toxicants and are clearly worthy of study. As a result they are also highly regulated. How do we address the rest of the chemicals in our world?
In the late 1990s the Environmental Defense Fund issued a groundbreaking report entitled Toxic Ignorance: The Continuing Absence of Basic Health Testing for Top-Selling Chemicals in the United States (http://www.edf.org/sites/default/files/243_toxicignorance_0.pdf). It proved with undeniable hard data that, at that point in time, “…even the most basic toxicity testing results cannot be found in the public record for nearly 75% of the top volume chemicals in commercial use.” As you might imagine, it caused quite a stir; the EPA got involved and eventually hatched the High Production Volume (HPV) Challenge program (http://www.epa.gov/hpv/). This resulted in considerably more toxicological data, but as you might guess there remains a severe lack of data for the tens of thousands of chemicals still in commerce to which folks are exposed every day.
But that takes us to an
even more fundamental question: Why
haven’t we been testing the chemicals that we breathe, eat and touch in our environment to
determine their toxicological effects all along? Why has it taken the cold hand of public
scrutiny and regulation to get things moving?
I think one of the strongest factors behind this is the misguided presumption of safety by those with an economic stake in these chemicals. Many believe at some level that “no proof of risk means proof of no risk.” This is, of course, not true; however, for these folks there is no incentive to go “looking for trouble” by testing the chemicals to identify the hazard they pose and the potency of that hazard. Toxicology testing is, or has been, considered relatively expensive. Thus, the reasoning goes, why spend money on testing when you assume the chemical is safe and the results could only bring bad news?
There is another large factor in this: the mistrust of toxicological data. Those who do not like the results point to the high doses used in toxicological studies and assert that those doses do not relate to or represent the exposures received in the real world and are therefore unrealistic measures of risk. We will get into these issues (dose-response extrapolation and setting exposure limits) in future blogs, but I think you can see how the politics might play out here to produce a strong bias against doing toxicological testing.
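As a preview of those future posts, here is a minimal sketch of one common (and contested) way high-dose study results get related to real-world exposures, namely straight-line extrapolation from the study result down toward zero. Every number in it is hypothetical:

# Sketch of linear low-dose extrapolation from a high-dose study result.
# All doses and response rates are hypothetical, for illustration only.

study_dose = 100.0        # mg/kg-day administered in the (hypothetical) animal study
study_extra_risk = 0.10   # extra incidence of the adverse effect seen at that dose

slope = study_extra_risk / study_dose   # risk per mg/kg-day under the linear assumption

real_world_exposure = 0.005             # mg/kg-day, a far lower environmental exposure
estimated_risk = slope * real_world_exposure

print(f"Estimated extra risk at real-world exposure: {estimated_risk:.1e}")   # 5.0e-06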
So at the end of 2013 we are left with lots and lots of chemicals but relatively little toxicology data to characterize the hazard and potency of these tens of thousands of substances in commerce. My sense, however, is that, as Bob Dylan once sang, “The times they are a-changin’…” The European REACh statute is essentially forcing risk assessment and the assignment of exposure limits based on very prescriptive procedures. I will go over REACh in a future blog because I believe it has the capacity, given the political will, to ultimately drive the science of risk assessment. This could conceivably force much more testing of both toxicity and exposure, but that remains to be seen.
On another front, the EPA is spending considerable money (tens and perhaps hundreds of millions of USD) and resources on advanced toxicological screening using innovative molecular biology techniques in a program entitled ToxCast: http://www.epa.gov/ncct/toxcast/ More confident data will begin to feed the risk assessment process, which should ultimately lower the uncertainty of the analyses. Lower uncertainty will, in turn, lower the forced overestimation of risk that comes from doing a risk assessment with a lack of data. Indeed, in the end this confident knowledge will be much more cost-effective, focusing resources on real risk rather than spending money on phantom risk born of uncertainty.
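To illustrate what that “forced overestimation” looks like in practice, here is a minimal sketch of the usual response to missing data, namely stacking default 10× uncertainty factors onto a study result; the NOAEL and the choice of factors here are hypothetical:

# Sketch of how data gaps force conservative (over)estimation: each missing piece
# of knowledge is typically covered by a default 10x uncertainty factor, which
# pushes the estimated "safe" dose lower. All numbers are hypothetical.

noael = 50.0   # mg/kg-day, no-observed-adverse-effect level from a (hypothetical) study

def safe_dose(noael, uncertainty_factors):
    """Divide the study NOAEL by the product of the applied uncertainty factors."""
    product = 1.0
    for factor in uncertainty_factors:
        product *= factor
    return noael / product

# Sparse data: animal-to-human, human-variability and database-gap factors all applied.
print(safe_dose(noael, [10, 10, 10]))   # 0.05 mg/kg-day

# Better data (e.g., from improved screening) can justify dropping a default factor.
print(safe_dose(noael, [10, 10]))       # 0.5 mg/kg-day, ten times less conservative

Better data means fewer default factors, a less drastically lowered “safe” dose, and resources aimed at real rather than phantom risk.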