Wednesday, February 4, 2015

How Big and Small Numbers Influence Science Communication Part 2: Understanding de minimis risk

In my last post I talked about big numbers and how they can cause confusion in the minds of the media and the public. In this post I want to discuss the other side of the coin: extremely small numbers and how they can be misconstrued in risk communication. You have all seen headlines like: “Dangerous Levels of Radiation Recorded in Canada as Fukushima Radiation Dangers Continue” and “Oilsands tailings ponds emit pollutants into the air, study confirms”. When I see one of these headlines, the question that comes to my mind is: do the reported results represent real risks, or do they represent interesting science that has been badly miscommunicated, either by ill-informed reporters or by activists with an agenda?

You might ask why I am such a cynic. Well, I know that in my business we have instruments that allow us to identify compounds at extremely low concentrations. As I mentioned in my previous post, our mass spectrometer was able to “see” to the part per trillion (ppt) range. This is the equivalent of a single drop of liquid in a large lake. The problem is, just because I have an instrument that can identify a drop of poison in a lake doesn’t necessarily make that poison a health hazard. Just because a compound is considered “toxic” at high concentrations doesn’t necessarily mean it poses a risk at much lower concentrations. Unfortunately, in risk communication this understanding does not appear to be widespread. There appears to be a population out there who believe that any concentration of a toxin is too high a concentration.
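To make the “drop in a lake” comparison concrete, here is a quick back-of-the-envelope sketch; the drop and lake volumes are illustrative assumptions on my part, not figures from any study.

```python
# Sketch: what does one drop of a contaminant in a lake work out to in
# parts per trillion (ppt)? The volumes below are illustrative assumptions.
DROP_ML = 0.05                  # a typical drop is roughly 0.05 mL
LAKE_M3 = 50_000                # a small lake: 50,000 cubic metres of water
lake_ml = LAKE_M3 * 1_000_000   # 1 m^3 = 1,000,000 mL
ppt = DROP_ML / lake_ml * 1e12  # volume fraction expressed in parts per trillion
print(f"{ppt:.1f} ppt")         # one drop in this lake is about 1 ppt
```

So an instrument working at the ppt level really is resolving a single drop in tens of thousands of cubic metres of water.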

Looking at the hack writer's handbook I am reminded that before I can write any post on toxicity, I have to quote Paracelsus, who said: “all substances are poisons; there is none which is not a poison. The right dose differentiates a poison from a remedy”. Anyone knowledgeable in the field recognizes that this is true in the general case, but we must add some qualifiers. The critical qualifier is that every individual/species has unique characteristics, and as such a dose that may be fatal to one species/member of the community may be relatively benign to another. This can manifest in a number of ways. For instance, zinc is harmful to viruses at low concentrations (it inhibits rhinovirus replication), so taking a zinc lozenge during a cold will slow down the replication of the cold virus and help your immune system fight the cold. But zinc is also toxic to humans at high concentrations, so regularly taking mega-doses of zinc to avoid getting a cold could instead put you in hospital. Similarly, an alcoholic who has developed a metabolic tolerance for alcohol can ingest a dose that would kill a person without that tolerance.

In toxicology this differentiation is addressed by a concept called the LD50 (median lethal dose). An LD50 represents the dose (typically in mg/kg of body weight) required to kill 50% of a population of test organisms, be they humans or fruit flies. Because our endpoints in toxicology are not always death, we also have other measures. The most common are the ED50 (median effective dose), which represents the dose expected to produce a given effect in 50% of test organisms, and the “no observed adverse effect level” (NOAEL) which, as the name suggests, represents a dose that would not be expected to have observable adverse effects in any test organisms.
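These measures can be visualized with a toy sigmoidal dose-response model. The sketch below uses the Hill equation with a made-up ED50 of 10 mg/kg and a Hill coefficient of 2; both values are purely illustrative and do not describe any real compound.

```python
# A toy sigmoidal dose-response curve (Hill equation). The ED50 and
# Hill coefficient are arbitrary illustrative values, not real data.
def response_fraction(dose, ed50=10.0, hill=2.0):
    """Fraction of test organisms showing the effect at a given dose (mg/kg)."""
    return dose**hill / (ed50**hill + dose**hill)

print(response_fraction(10.0))  # 0.5 -- by definition, half respond at the ED50
print(response_fraction(1.0))   # ~0.01 -- the low-dose region, near the NOAEL
```

At the ED50 exactly half the test population responds; well below it the response fraction falls toward zero, which is the region where the NOAEL is set.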

In toxicology, the goal is to establish the dose-response relationship for a compound of interest. Below some low concentration you would expect to see no effects (the NOAEL). Eventually a threshold is reached where effects are seen; this marks the start of the linear portion of the curve, where increases in concentration are expected to produce commensurate increases in damage. Eventually you reach a plateau, called the maximal response level, where 100% of test organisms are expected to show the maximum response. For our risk communication discussion the two really important measures are the NOAEL (the threshold below which you would expect no effects) and the “de minimis zone”. In toxicology the term de minimis is used to refer to a risk that is negligible and too small to be of societal concern (ref). In the de minimis zone we have compounds that could theoretically have some effect, but the effect does not represent a general concern to society.

To explain: toxicology is an extremely conservative discipline. The aim is to be safe, and in order to be safe we add layers of conservatism on top of layers of conservatism. Think of it like designing an elevator. In designing an elevator to hold 6 people you want to build a degree of safety (conservatism) into the design. You would not want the elevator to collapse if seven people happened to push their way in, nor would you want it to collapse if six sumo wrestlers decided they wanted a ride. So elevator engineers add an order or two of conservatism into their calculations: while their elevator may be rated for 1,000 kg, it might actually be designed to hold 10,000 kg without failing. Similarly with toxicology: since we cannot test compounds on humans, a lot of the testing is done on animals. Once an ED50 is established in an animal species, a layer of conservatism is added as we move from the test species to humans. Usually we start with an animal-to-human uncertainty factor of 10, so a dose with an ED50 of 10 mg/kg in a mouse would be reduced to 1 mg/kg for humans. This gives us a safety fudge-factor. Going back to the concept of de minimis: given all the layers of conservatism built into toxicity testing, there will be exposures to chemicals that are so small that, even though they exceed the bottom end of the NOAEL threshold, they still pose no significant risk to the population at large.
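The uncertainty-factor arithmetic above can be sketched as follows. In practice regulators often stack a second factor of 10 for variability among humans, which I include here; the starting animal dose is a hypothetical value for illustration.

```python
# Sketch of layered uncertainty (safety) factors shrinking an animal
# dose into a human guideline. The starting value is hypothetical.
animal_noael = 10.0        # mg/kg/day from an animal study (made up)
uf_animal_to_human = 10    # interspecies uncertainty factor
uf_human_variability = 10  # sensitive individuals within the human population
reference_dose = animal_noael / (uf_animal_to_human * uf_human_variability)
print(reference_dose)      # 0.1 mg/kg/day
```

Two stacked factors of 10 turn a 10 mg/kg/day animal dose into a 0.1 mg/kg/day human guideline, which is exactly the kind of conservatism the elevator analogy describes.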

As a quick side note, there exists a second class of compounds that does not follow this classic dose-response curve: the “non-threshold carcinogens”. Non-threshold carcinogens are compounds that have no lower bound of safety (no NOAEL). Benzene is one such compound: each molecule of benzene you ingest increases your likelihood of getting cancer by some finite amount. I will not address them further in this article for fear of going overlong, but I didn’t want anyone complaining that I was unaware of their existence.

So let’s go back to the headlines presented above. The second headline dealt with a study on a class of compounds known as polycyclic aromatic hydrocarbons (PAHs). PAHs are constituents of petroleum hydrocarbons and have been in the news a lot because tests have found them in the air, rivers and lakes of Northern Alberta (where they are typically assumed to be associated with bitumen extraction activities). But PAHs are also generated every time you burn fresh wood or overcook your steak on the barbecue. On our planet the single biggest non-anthropogenic source of PAHs is uncontrolled fire (forest fires and biomass burning). Some PAHs are human carcinogens at high concentrations, but curiously enough, at lower concentrations we simply shrug them off. It has been posited that over the course of human evolution we, as a species, were naturally exposed to regular, low doses of PAHs from cooking and forest fires. The result is that our bodies have evolved a mechanism to essentially ignore low concentrations of PAHs in our blood streams. It is only when the concentrations cross a threshold (one that differs for each PAH) that a negative response occurs.

In the report above, the tailings pond was responsible for potentially releasing as much as a TONNE!!! of PAHs a year into the atmosphere. Now a tonne sounds pretty terrifying for any “toxic” compound, until you put that number into perspective. It has been estimated that in North America forest and prairie fires produce around 19,000 tonnes of PAHs every year (ref). Since the oil sands are found in the boreal forest region, where massive forest fires are a yearly occurrence, this tonne of PAHs suddenly becomes far less significant from a human and ecological health perspective.

Similarly, the news was full of frantic headlines when the report “Legacy of a half century of Athabasca oil sands development recorded by lake ecosystems” came out last year. In this study PAHs were found in a number of Alberta lakes. A careful reading of the report, however, demonstrated that concentrations of PAHs in the lakes reported as “oil sands lakes” were entirely comparable to the concentrations in the “control” lakes, which were highly isolated from the oil sands. This finding was not particularly unexpected, but that fact was not advertised by the activist community, who trumpeted the “oil sands” lake results. Why wasn’t it surprising? Well, recognize that the oil sands represent a massive regional feature. The reason strip mining of oil sands was initiated back in the 1960s and 1970s is that the material literally sits on the ground surface in that area. For millennia, rainfall has been washing this material into the rivers and lakes of the region. Also, being boreal forest, yearly forest fires have been liberally spreading PAHs into the airshed. The reason we are only hearing about PAHs in the air and water of Northern Alberta now is the presence of oil sands extraction (and the associated research funds), and the fact that only recently has anyone been able to build mobile mass spec units that can measure air concentrations around Fort McMurray. In my mind, had anyone been able to find a lake in the region that did not have detectable concentrations of PAHs, now that would have been a noteworthy discovery.
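The perspective calculation is simple enough to write down; the one-tonne and 19,000-tonne figures are the ones quoted in this post.

```python
# Comparing the tailings pond's ~1 tonne/year of PAHs against the
# ~19,000 tonnes/year estimated from North American wildfires.
tailings_tonnes = 1.0
wildfire_tonnes = 19_000.0
share = tailings_tonnes / wildfire_tonnes * 100  # percentage of wildfire total
print(f"{share:.4f}% of the wildfire contribution")
```

The pond's contribution works out to roughly five thousandths of one percent of what the fires put out in a typical year.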

Since this post is getting awfully long, I will stop here. My follow-up post will pick up from this point to discuss how we decide whether a compound is “toxic”, which risks we can safely ignore, and which need immediate action.


  1. There is a third class, called hormesis, where a small dose has a beneficial effect, like fertilisers.

  2. Quite right; sadly, to keep the post user-friendly I didn't go into the detail I would typically like on a number of topics. In this case, though, hormesis had totally slipped my mind.

  3. Similarly, there is an issue with science communication related to the health impacts of particulate matter air quality. Later this month the EPA is hosting workshops to review the science on the health effects and other impacts of particulate matter pollution, to review the levels of the PM air standard, and to weigh information about the effects of very small ultrafine particles.

    Cynic that I am, I bet very little attention will be paid to a puzzle in the data. Specifically, childhood asthma rates have increased while at the same time ambient air pollution levels have decreased. For example, EPA’s air quality trends website shows significant decreases in all pollutants, but at the same time the U.S. Department of Health and Human Services Centers for Disease Control and Prevention National Center for Health Statistics notes that “Childhood asthma prevalence more than doubled from 1980 to the mid-1990s (9:10) and remains at historically high levels”.

    The ultimate issue is that the EPA justifies lower air pollution emission limits by claiming that reductions will improve health outcomes. My question is: given the large reductions in ambient pollution, where are the improvements in health outcomes? Show me those results!