This blog posting is a reminder of how difficult it is to communicate good science both through the media and to our fellow scientists, and how challenging it is to address both audiences simultaneously. It is derived from a three-way Twitter discussion I had with Dr. Jay Cullen (a marine chemist, oceanographer and Associate Professor at the University of Victoria, School of Earth and Ocean Sciences) and Suzy Waldman (a PhD student studying risk communication at Carleton University). The discussion related to an article in the Victoria Times Colonist titled B.C.’s citizen scientists on alert for radiation from Japan. The topic of our conversation was the initial negative perceptions Ms. Waldman and I had each formed on our independent readings of the article. As many of you know, I wrote a blog post in December on bad representations of risk in reporting of the Fukushima plume, so I have been sensitized to the topic. Ms. Waldman, also being in the field, likely has very similar sensitivities.
Our discussion started off on a rancorous note (okay maybe even
accusatory on my part). Both Ms. Waldman and I had read the article, and I directed a couple of tweets at the scientists quoted in the story suggesting that they might be responsible for overblowing the risks associated with the identified cesium isotope (Cs-134) plume.
Dr. Cullen rightly took exception to my tweets because, as he pointed
out, a careful reading of the article pretty clearly demonstrates that
he had gone out of his way to explain why the radiation identified
did not actually pose a significant risk to human health or the environment.
Our discussion eventually turned to why Ms. Waldman and I had both experienced the same apprehension in spite of
the article's contents.
Ms. Waldman posited that the formulation of the article
was key. She pointed out that the first half of the article was full of
really loaded words: "fallout, disaster, Chernobyl, peak, plume,
radiological health risk". For those of you unfamiliar with risk communication,
these types of words are commonly called "dread" words. Dread is a term used by Dr. Paul Slovic in his seminal article on risk communication, "Perception of
Risk". For those of you without access to journal articles, Dr. Roger
Pielke discusses the topic in this blog
posting. Essentially dread words draw a visceral reaction
from the reader irrespective of their positioning or formulation in an article. They typically
represent risks or fears over which the reader has no control and often
represent risks that cannot be viewed or experienced using standard human
senses. In re-reading the article it is clear that Ms. Waldman has a good point. The sheer number of dread words in the first couple paragraphs would be
enough to get most readers feeling uneasy. I have previously
suggested that any reader who wants a non-technical but compelling read on
the subject of risk should pick up Dan Gardner's book of the same name. In the book Mr. Gardner discusses how fear and risk are used by organizations and writers to advance their causes. In this case, it is likely that the writer (or his/her editors) recognized the language as useful for pulling eyes to the page and keeping readers reading.
My experience in science communication is different from Ms. Waldman's and as such the one thing that really drew my eyes in the article was a line about the health risk of the cesium radioisotope:
"It can pose a radiological health risk because it tends to
concentrate in organisms,” Cullen said. “[But] health physicists suggest the
exposure of consumers to these fish don’t pose a danger to anybody’s health.”
My first response to this statement was strong annoyance. Surely what Dr. Cullen meant was that health physicists would view the risk as negligible (the de minimis principle). To put it in perspective, in my previous posting I pointed out that 5 becquerels represents approximately the radiation in a little less than half a banana. Oddly enough, my annoyance at that sentence caused me to miss how Dr. Cullen went on to relate that the exposure "is expected to be three to five becquerels per cubic metre of water. Canadian guidelines for safe drinking water impose a limit of 10,000 becquerels per cubic metre, he said". After re-reading the piece I realized that I had been caught in the "good news/bad news" trap. For those unfamiliar with the concept, research shows that when good news is presented immediately following bad news (as in this case) the reader/listener can be unsettled by the bad news and become less receptive to the ensuing good news (ref).
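The reassurance buried in those numbers is easy to verify with a back-of-the-envelope calculation. A minimal sketch using only the figures quoted in the article (the variable names are mine):

```python
# Figures quoted in the article (both in becquerels per cubic metre)
expected_activity = 5      # upper end of the predicted plume concentration
guideline_limit = 10_000   # Canadian safe drinking water guideline

# The predicted activity as a fraction of the regulatory limit
fraction_of_limit = expected_activity / guideline_limit
print(f"Plume concentration is {fraction_of_limit:.2%} of the guideline limit")
# prints: Plume concentration is 0.05% of the guideline limit
```

Even at the high end of the prediction, the expected activity is one two-thousandth of the drinking-water guideline, which is precisely the point Dr. Cullen was making.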
So by my count it was Scientists - 2, readers - 0, but I also realized that Dr. Cullen may unconsciously have been
partially to blame for my unease. I say unconsciously because I feel that in his interview he made the mistake of speaking "in Science" to a
non-scientist. As a non-academic scientist, I am reminded to assume my audiences are at
least as smart as I am and to speak to them accordingly. Speaking down to an
audience almost always ends badly. The corollary to that statement is
that the public is not always familiar with the way scientists communicate. As
scientists we are trained with a language (jargon) all our own. When my wife
says something is "significant", she means "important";
when I say the word I mean that it met a pre-determined level of
significance (e.g., p < 0.05). The risk professionals in our
office will never say that something has "no risk". Even if the
risk is only one in ten billion it still represents a risk. Instead a risk
professional will say it has "acceptable risk", where the level of acceptance is based on the conclusions of a reputable scientific or health
organization. Similarly, when discussing other people's work I will often
use correlation terms like "suggest" and "indicate" as
they give me wiggle room when I don't have exact figures (like exact
levels of statistical significance, etc.) at hand. For me these words tell my fellow technical people:
"I don't have the numbers at hand but can assure you the information I
am relaying to you is correct". I tend to use causation words only when speaking in my area of expertise.
I am guessing that this is what happened
to Dr. Cullen in this case. Using typical scientific diligence he used the
softer term "suggest" rather than a stronger term or phrase. Had he
said "Health Physicists will tell you (or assure you) that
exposures....don't pose a danger" it would have left the reporter with no doubt (or ability to manufacture doubt). Unfortunately, when a reporter hears the
word "suggest" he/she is more likely to understand it in a
literal sense and not the common scientific usage. That is: health physicists "propose"
or "put forward for consideration that exposures....don't pose a
danger". This literal definition completely misconstrues the level of
doubt associated with the risk. Moreover, if the writer's editor was looking
for a quotation that could be interpreted broadly (perhaps to move the article closer
to the front of the paper) this would be the line to fixate on.
Ultimately, the confusion comes down to a common problem
we, as technically trained, practicing scientists, face every day. We are trained to avoid errors of confidence (there's that Type I error thing coming to haunt us again). When working and communicating with our peers we tend to use language that is less than definitive, trusting that our meaning will be understood by all. Unfortunately, we operate in a world full of people living blissfully in a Dunning-Kruger state who are willing to express a level of certainty that an expert cannot ethically match. As specialists we have to remember to turn off our science blinders and speak
with forethought about how our words will be read, interpreted and possibly misinterpreted. Certainly we can use correlation words, but we should stick to the ones that have comparable meanings in the vernacular. I cannot reasonably say the radiation poses "no risk", but I can certainly say it poses "negligible risk".
Coming from the other side, as scientists we also have to give our colleagues some slack. We have to recognize when our colleagues are trying to relate complex technical topics in a manner that is understandable by the public. Give your colleagues the benefit of the doubt and start with the assumption that they are acting in good faith even when (especially when) you disagree with them. Piling on and picking nits when a colleague is trying to make complex technical information understandable for the general public does nothing to expand our knowledge and only hurts the cause of good science in public.
Author's note: please don't interpret my use of the names Ms. Waldman or Dr. Cullen as an attempt to give one greater credibility over the other. I was brought up in the old school, where one did not use another's first name in discussion unless one knew the individual well enough to justify it. I apologize in advance if my traditional style of writing has offended.