Monday, September 21, 2015

Why the West Coast’s gas prices are so high and who is to blame


Early in my blogging career I wrote a blog piece discussing factors that affect gasoline and diesel prices on the West Coast. The post was called A Primer: Why Cheap Oil Doesn't Mean Cheap Gasoline or Diesel and dealt mostly with how gasoline is created in refineries. Well, the topic has come up again and once again we have people complaining about gasoline and diesel prices on the West Coast in a world of low oil prices. Most recently the National Observer ran a post on the subject, “Canadians get ripped off at the pumps”, written by local economist Robyn Allan. Having read that article I suppose it is about time I updated my earlier post and addressed some of the obvious shortcomings in Ms. Allan’s piece in the Observer.
The first thing you need to know to understand gasoline prices on the West Coast is that it is all about supply and demand and has very little to do with the price of oil. The reason for this is simple: it is not oil that you put in your gas tank; it is gasoline and diesel, both of which are refined products. In my earlier post I gave a description of how we convert oil into gasoline and diesel and pointed out that there is a limit to how much gasoline and diesel can be generated from a barrel of oil. This is especially problematic with respect to diesel fuel since the component of the crude oil mixture used to generate diesel fuel is the same one used to make kerosene and fuel oils (for household heating). The diesel market is, thus, heavily affected by the current and future market for fuel oil (especially in central and eastern Canada where fuel oil is heavily used for home heating).
As I note above we can’t use crude oil in our fuel tanks, we need to use refined petroleum products and we all know where refined petroleum products come from: refineries. So it is not just the amount of oil on the market that defines the price of gasoline but the ability of the refineries to convert that oil into useful things like gasoline and diesel. That is not all, however; once we have refined the oil into gasoline we still have to transport it to market. All the refined gasoline in the world does you no good if it is stuck on the east side of the Rockies. These are the portions of the story where Ms. Allan’s analysis in the National Observer goes off the rails. In her piece she pretty much ignores the two critical bottlenecks in the progression from oil in the ground to gasoline in your tank: refinery capacity and transportation capacity. Today I am going to deal with refining capacity.
As anyone who follows the oil industry knows, we on the Canadian West Coast have allowed our refining capacity to wither and die on the vine. Historically, there were several oil refineries on the West Coast including the Chevron refinery (still open), the Imperial Oil Ioco refinery and the Shell refinery. Thanks to regulatory hurdles and market forces we are now down to a single refinery (Chevron) which can process about 57,000 barrels/day (b/d) of oil. To put that number into perspective, the Chevron refinery only supplies about 25% of B.C.’s commercial fuel supply and 40% of YVR’s jet fuel needs. As a consequence we import a LOT of fuel from refineries in Alberta (mostly around Edmonton). According to Natural Resources Canada, we import almost 60% of our petroleum product needs via pipeline and gasoline tanker cars (by rail) from Alberta. Unfortunately, even that is not enough and so we are also dependent on the big refineries in the Puget Sound for things like aviation fuel (from Cherry Point) and additional volume when the prairie market gets too tight. Because that additional fuel is bought on an irregular schedule it is subject to the whims of supply and demand. This makes US supplies a critical consideration in any gasoline price discussion in BC.
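To get a feel for what those percentages imply, here is a back-of-envelope sketch using only the figures quoted above. The numbers are illustrative rather than audited supply data, and attributing the entire residual share to Puget Sound purchases is my simplifying assumption based on the description above.

```python
# Rough supply arithmetic using the figures quoted above (illustrative only).
chevron_capacity_bpd = 57_000   # Chevron (Burnaby) throughput, barrels/day
chevron_share = 0.25            # ~25% of B.C.'s commercial fuel supply
alberta_share = 0.60            # ~60% arrives from Alberta by pipeline and rail

# If 57,000 b/d is ~25% of the market, the implied total demand is:
implied_demand_bpd = chevron_capacity_bpd / chevron_share
# Whatever is left over is assumed here to be the irregular Puget Sound purchases.
spot_share = 1 - chevron_share - alberta_share

print(f"Implied B.C. refined-product demand: ~{implied_demand_bpd:,.0f} b/d")
print(f"Share bought irregularly from the Puget Sound: ~{spot_share:.0%}")
```

Run as written, that works out to roughly 228,000 b/d of total demand, with something like 15% of it exposed to the spot market in a bad year, which is exactly where the price pain shows up.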
The United States has broken its petroleum market up into five Petroleum Administration for Defense Districts (PADDs). This was originally done during the Second World War to ensure energy supplies but is still in effect to this day. The West Coast of the US, including California, Oregon and Washington, makes up PADD V. PADD V is a rather unusual district because of its geography (it is mostly bordered on the east by mountains). Unlike the other districts, which are linked internally with lots of pipelines and combined capacity, PADD V is pretty much stuck on its lonesome and has to be self-sufficient. There are some minor cross-PADD connections but mostly when something goes wrong in PADD V it hits the entire region. Well, this year has been a tough one for PADD V. In February, a major fire shut down the Torrance refinery in California. Torrance is the third largest refinery in California and supplies about 10% of California’s gasoline supply (remember, the California gasoline market is essentially equivalent to the entire Canadian gasoline market). The loss of Torrance meant that all of the other refineries in PADD V had to make up the difference. All of a sudden the Puget Sound didn’t have an excess of fuel to sell to British Columbia because that fuel was being sold in California.
In addition to the Torrance issue, the American mid-west was also having a bad time. For most of August the BP Whiting Refinery in Indiana was also shut down. This left a huge crunch in the prairie market as mid-west suppliers were offering top dollar for gasoline from Alberta. That put BC in a pickle. Alberta didn’t have any cheap gasoline because it was all going to fill a need in the US mid-west, and PADD V didn’t have any cheap gasoline because of the fire in Torrance and another disruption in April. We were the equivalent of the lonely traveler wandering into town, during a nasty storm, in the middle of convention season and demanding a room. Without a reservation (firm, regular, fixed-rate contracts for gasoline) and without any alternatives (since Edmonton couldn’t help us) we ended up having to pay top dollar for our gasoline. Thus we had $1.20+/litre gasoline in a world where the oil price was below $60/bbl.
Of course the piece in the National Observer completely ignored these conditions. In the Observer it was all the greedy oil companies’ fault that we could not get cheap gas. No mention was made of the red tape, fuel access restrictions (pipeline capacity) and bad political climate that scared all but one of the local refineries out of the market. No mention was made of the work to block expansion of pipelines that would have allowed more refined gasoline to move east-west across the country. No mention was made of the protestors who locked down the Chevron refinery, further curtailing supply.
The truth is that we as Canadians have brought this down upon ourselves. We made it uncomfortable for refineries to exist in BC by limiting the supply of crude (by fighting pipelines) and adding red tape. In doing so, we have made ourselves utterly dependent on refineries in Alberta and the Puget Sound to keep our cars and buses running. As in so many other environmental fields (see my post on rare earth metals) we have off-loaded the environmental costs to other jurisdictions and lived like environmental free-loaders, letting others take the risks while we reap the rewards. Well, now our chickens have come home to roost. We are not getting “ripped off at the pump” as Ms. Allan would claim; rather we are getting a well-justified comeuppance. We made a politically expedient decision to limit the production and transportation of a critical component of our economy (refined fuels) and now have to pay the price for that decision when regional supplies are low. The ironic part of all this is that, from an environmental point of view, more expensive fuel is supposed to be a good thing: it forces people to use less of it. Why is that ironic? Because a media outlet like the Observer, which has been working towards exactly that outcome, is now the one complaining most loudly that it has come to pass. That is just rich!

Thursday, September 17, 2015

Debunking the Leap Manifesto - Demand #9: Local agriculture is not always better


I have been asked numerous times in the last couple of days what I have against “The Leap Manifesto”. My answer is simple: the Leap Manifesto is of particular interest to me because it touches so close to my intellectual home, and it annoys me because it is demonstrably lacking in scientific rigour. As I have written numerous times before on this blog, I am a Pragmatic Environmentalist who believes in evidence-based environmental decision-making. My personal goal is to help make demonstrable and tangible improvements in our country’s environmental performance. As a pragmatist I am not the type of person who would suggest that it is sensible to “leap and the net will appear”, nor am I a newbie in this field. As I noted in an earlier post (Environmentalism and Pragmatism, the two aren't mutually exclusive - A blast from my past), I wrote my own personal “Pragmatist’s Rules of Engagement” back in 1995. So to further answer those questioners: having worked a lot, read a lot and seen a lot, I figure it is up to people like me to inject some science and defensible data into a debate that seems mostly about politics and emotions. If we waste all our built-up moral capital on emotionally-charged and scientifically-indefensible projects (like the Leap Manifesto) then we won’t have any left to spend when it comes to making real changes that can produce tangible improvements locally, regionally and nationally.
Having addressed the Manifesto’s Demands #2, #3 and #6 in my previous post, I thought I should take another shot at this document by looking at another environmental fairy tale, Demand #9:
We must develop a more localized and ecologically-based agricultural system to reduce reliance on fossil fuels, absorb shocks in the global supply – and produce healthier and more affordable food for everyone
The “smaller is better”, “local is better”, “organic is better” memes in agriculture are some of the most pernicious myths to come out of the modern environmental movement and show a profound lack of understanding of how food is grown and energy is used. I would argue this goes back to the urban nature of most of our modern environmental activists, but that is more of a personal opinion than a statement based on defensible facts. In a previous post Modern Environmental Fairy Tales: "Moving Back to the Land" and the 100 Mile Diet I discussed the modern “Arcadians” described by Martin Lewis in his 1992 book “Green Delusions”. These modern Arcadians seek to return us to a more pastoral time when we lived with a “more localized and ecologically-based agricultural system”. What they, and their more recent confreres the Degrowthers and the authors of “The Leap Manifesto”, seem to have forgotten is why we migrated away from that “pastoral” lifestyle in the first place. The reason is simple: during those “pastoral” times in our ancestral past people lived lives that were "solitary, poor, nasty, brutish, and short". Given our current human population density any attempt to move back to the land would be devastating to both the human population and to the ecosphere.
The minimum amount of agricultural land necessary for sustainable food security, with a diversified diet similar to those of North America and Western Europe (hence including meat), is 0.5 of a hectare per person. This does not allow for any land degradation such as soil erosion, and it assumes adequate water supplies. Very few populous countries have more than an average of 0.25 of a hectare. It is realistic to suppose that the absolute minimum of arable land to support one person is a mere 0.07 of a hectare–and this assumes a largely vegetarian diet, no land degradation or water shortages, virtually no post-harvest waste, and farmers who know precisely when and how to plant, fertilize, irrigate, etc. In India, the amount of arable land is already down to 0.2 of a hectare; in the Philippines, 0.13; in Vietnam, 0.10; in Bangladesh, 0.09; in China, 0.08; and in Egypt, 0.05. By 2025 the amount is expected to fall to: India, 0.12 of a hectare; the Philippines, 0.08; China, 0.06; Vietnam, 0.05; Bangladesh, 0.05; and Egypt, 0.03 (ref).
As of the year 2000, the US Northeast had a population of 49.6 million people living at a population density of 359.6 people/km2. This translates to about 0.69 acres (0.28 hectares) per person. If we returned to the land there would barely be enough land to support the population of the US Eastern Seaboard with a minimal vegetarian diet. Moreover, this “pastoral” lifestyle would not be conducive to centralized services like sewage and water. Without modern sewage treatment and water supplies the population would undergo massive “Degrowth” as disease and weather slowly eliminated the majority of the population. As for electrical supply, under this back-to-the-land scenario power would be supplied by solar panels. Solar panels will certainly supply a house in South Carolina with reliable power in summer, but the same cannot be said about those same panels in a northern winter. Consider the “Snowpocalypse of 2015” and ask yourself how those solar panels would provide power in the middle of one of the coldest winters on record while buried under two meters of snow.
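For readers who want to check the arithmetic, here is a minimal sketch of the conversion from population density to land per person. The density figure and the hectare thresholds are the ones already quoted above; only the square-kilometre-to-acre conversion factor is added.

```python
# Converting a population density into land per person (illustrative arithmetic).
people_per_km2 = 359.6       # US Northeast density quoted above (year 2000)
acres_per_km2 = 247.1        # one square kilometre is roughly 247.1 acres
hectares_per_km2 = 100.0

acres_per_person = acres_per_km2 / people_per_km2         # ~0.69 acres
hectares_per_person = hectares_per_km2 / people_per_km2   # ~0.28 hectares

print(f"{acres_per_person:.2f} acres ({hectares_per_person:.2f} ha) per person")
# Compare against the thresholds above: 0.5 ha/person for a diversified North
# American diet and 0.07 ha/person as the bare minimum for a largely vegetarian
# diet -- and that 0.28 ha is *total* land, before subtracting cities, roads,
# forests, bogs and lakes.
```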
As for nature, once you discount the areas where humans cannot farm (bogs, lakes, etc.) there would not be an unallocated acre on the Eastern Seaboard. There would be no room for growing crops for profit and, more importantly, there would be no room for nature of any sort. I don’t see that existence as being in harmony with nature so much as utterly antithetical to it.
As for the importance of “localized” food and food security, as I wrote in another blog post:
From an environmental perspective regional self-sufficiency in food is a loser. Large-scale farming, with its ability to maximize crop yields and thus reduce land needs, is a necessity in a world of 7 billion souls. Anyone really interested in this topic should read The Locavore’s Dilemma by Desrochers and Shimizu. They comprehensively deconstruct the environmental arguments for the 100 mile diet and the concept of “food miles”.
Activists counter that imported food has to be moved by ship or airplane, but Desrochers and Shimizu point out that 82% of the estimated 30 billion food miles associated with U.K.-consumed food are generated within the country, with car transport from shop to home accounting for 48% and transport to stores/warehouses representing 31% of food miles. As for carbon dioxide equivalents, as Tamsin McMahon notes in Maclean’s: research from the U.K. comparing local tomatoes with those imported from Spain showed the U.K. tomatoes, which had to be grown in heated greenhouses, emitted nearly 2,400 kg of carbon dioxide per ton, compared to 640 kg for the Spanish tomatoes, which could be grown in unheated greenhouses.
As for the line from the Manifesto about this food being “healthier”, the research is definitive on that score as well: organic foods are no healthier than food from non-organic farms. Meanwhile, the widespread use of “natural” fertilizers on organic farms can lead to the contamination of groundwater supplies with nitrates and, in exceptional cases, animal wastes and E. coli. While factory farms have their own fertilizer/waste issues, they tend to be much more tightly regulated and have the financial wherewithal to invest in the most efficient treatment systems. Not to mention that, in sufficient quantities, their outputs can actually have some value on the open market.
As for the suggestion that local food would be more affordable than commercially bought food, that can be demonstrated as false on its face. The primary driver of food prices is input costs, and small, inefficient farms have higher costs per bushel for virtually every foodstuff known to mankind. For proof I suggest you go to your local community market and compare the cost of the market vegetables against those at your local grocery store. Alternatively, look at the charts in Desrochers and Shimizu or go look on the shelves of your local “Whole Foods” outlet.
As I describe above, locavores, 100-mile dieters, modern Arcadians and Degrowthers all continue to suggest that local is better for you and better for the environment. The problem is that all the research on the topic says exactly the opposite. Local food may make you feel better about yourself, but it uses more energy and fertilizer per bushel to produce and deliver to your table; it is no healthier than the alternatives; it is less efficient, requiring more land per bushel; and every acre of nature carved out for a small, inefficient hobby farm is one less acre where nature can be allowed to flourish. For the authors of the Manifesto to suggest that localized food production be a goal runs exactly contrary to their own demand that agriculture be ecologically-based. Modern agricultural practices are the only reason the earth can feed 7+ billion souls while still leaving any room for nature to do its thing with minimal interference from humans.

Tuesday, September 15, 2015

A Chemist looks at the Leap Manifesto and finds it wanting

This morning as I was enjoying a well-earned coffee break a fascinating announcement lit up my Twitter feed. It was about “The Leap Manifesto”. From the breathless tweets I expected a highly-researched document full of insight and new ideas, maybe like An Ecomodernist Manifesto which I blogged about earlier this year. To my disappointment I found a minimalist web page almost completely free of useful references or critical details.
Looking deeper, I went to the “sign the manifesto” section where I found “The 15 Demands” which apparently form the meat of this Manifesto. These demands range from the somewhat reasonable to the ridiculous to the sublime and would take numerous blog posts to address individually. Happily for me, I have been writing this blog for almost a year and the Manifesto addresses a number of topics I have previously covered in detail. That being said, no one is going to sit and read 5,000+ words on this topic, so tonight I will stick to my area of blogging expertise and address Demands #2, #3 and #6.
Demand #2 says the following:
The latest research shows we could get 100% of our electricity from renewable resources within two decades; by 2050 we could have a 100% clean economy. We demand that this shift begin now.
This demand is the only one of the lot that actually has any references associated with it since it is discussed on the cover page of the web site. The statement references two documents:
Sustainable Canada Dialogues. (2015). Acting on climate change: Solutions from Canadian scholars. Montreal, QC: McGill University.

Jacobson, M.Z., & Delucchi, M.A. (2011). Providing all global energy with wind, water, and solar power. Energy Policy, 39(3).
Regular readers of this blog will know well how I feel about these two documents. The first is a feel-good document written in a policy-oriented style that fails to impress. The major problem with the document is that it was written by urbanites who appear completely unaware of the scale of our transportation issues in Canada. I will not go further into that concept until later (Demand #6). Instead I will hit the bigger target: the Jacobson paper.
I have already written a couple very detailed blog posts on the Jacobson paper. The two blog posts are nominally about a follow-up paper but both primarily detail shortcomings in the Jacobson 2011 Paper. The first: Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part I Why no nuclear power? addresses serious shortcomings in the Jacobson model with respect to nuclear power. The second: Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part II What about those pesky rare earth metals? points out that renewable energy technologies depend heavily on rare earth metals. As I point out in another blog post On renewables and compromises Part II Rare earths in renewable technologies (and a follow-up blog post at the Huffington Post which I will discuss later), we simply do not have the supply of rare earth metals necessary to build the facilities suggested in Demand #2. It is lovely to demand that the government do something, but before you make a demand you might first try to determine whether accomplishing it is even possible.
Arguably the first half of Demand #2 (100% renewable electricity within two decades) may conceivably be possible with a Herculean effort, but the part about achieving a 100% clean economy by 2050 (i.e. 100% fossil fuel-free energy status) is simply a pipe dream. I did an intellectual exercise detailing what it would take to achieve a fossil fuel-free British Columbia; the short version is here: Dispelling Some Myths About British Columbia's Energy Picture and the more detailed version is here: Starting a Dialogue - Can we really get to a "fossil fuel-free BC"? The take-home message from those pieces: in order to achieve a "fossil fuel-free B.C." we would need to somehow replace the almost 60 per cent of our energy needs currently being met with fossil fuels through alternative sources. Given that BC, which is incredibly rich in hydro, cannot reasonably achieve fossil fuel-free status in the timeline presented, the idea that Saskatchewan or Ontario could achieve similar results without a heavy investment in nuclear power is simply inconceivable.
This brings us to Demand #3
No new infrastructure projects that lock us into increased extraction decades into the future. The new iron law of energy development must be: if you wouldn’t want it in your backyard, then it doesn’t belong in anyone’s backyard.
Demand #3 is a typical NIMBY/BANANA demand and reflects a common misconception about energy amongst the non-technically inclined. I address the problem in detail in another blog post On Renewables and compromises, Intermission: Energy Density and Power Density which points out that while our modern society is very power-hungry and uses a lot of energy, most renewable energy sources have very low power densities (they harvest relatively little energy per unit of land). Energy density, meanwhile, is the amount of energy stored in a unit of mass or volume, and the thing that makes fossil fuels so attractive to our society is that they represent a very dense energy source. The reason fossil fuels are so energy dense is that Mother Nature did the all-important job of converting the power of the sun into a biological form, and geology then compressed it from a less dense into a more dense form. Large energy projects cannot, by their nature, reasonably be put in every person’s backyard. If we are going to survive in a renewable energy future we will need a lot of energy from hydro and geothermal sources, and you simply can’t put a commercial-scale geothermal or hydro facility in anyone’s backyard.
To put it into perspective, solar, the highest-density renewable, has a theoretical power density of up to 200 W/m2, but the best solar collection systems seldom do better than 20 W/m2 (in desert solar photovoltaic farms). The further north (or south) you go, the lower the theoretical maximum and thus the lower the output of the resulting systems. A truly exceptional visualization of this is presented by David MacKay at http://withouthotair.blogspot.co.uk/2013/06/david-mackays-map-of-world-update.html. As for the remaining renewables, the best biofuels can achieve about 2 W/m2 while wind can achieve a maximum of about 3 W/m2. As Dr. Wilson points out, since Germany and the United Kingdom consume energy at a rate of approximately 1 W/m2, in order to supply either country with power using wind they would need to cover half of their total land mass with wind turbines, which is not a realistic option in a country with cities, farms and forests. Even with that coverage, the country would be powerless in the dead of winter or on any wind-free evening.
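The land-area claim is just division, so here is a quick sketch using the round numbers above. Both inputs are rough: real national consumption figures and real wind-farm outputs vary by country and by source, and large farms rarely reach the 3 W/m2 theoretical maximum.

```python
# How much land would wind turbines need to cover at these power densities?
demand_density = 1.0              # W/m2: UK/Germany consumption averaged over territory
for wind_density in (2.0, 3.0):   # W/m2: plausible range for large wind farms
    land_fraction = demand_density / wind_density
    print(f"At {wind_density} W/m2, turbines cover {land_fraction:.0%} of the country")
# Prints 50% and 33% -- on the order of a third to a half of the total land
# mass, and that is an average: it says nothing about calm nights or winter lulls.
```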
As for these energy systems, as I mentioned above, they cannot function without rare earth metals, and as I point out in my blog post Our Demand For Renewable Energy Comes With Canada's Dirty Little Secret, rare earth metal facilities are neither small nor clean and they certainly do not fit under the “new iron law”. The activists who prepared these demands appear to be unaware of where the wood, metal, concrete and aluminum needed to create their infrastructure actually come from. None of these can be scaled down to something you would build in your backyard.
NIMBY only works if you are rich enough to be able to import your raw materials from somewhere else. While I agree that most of the initial signers of the Manifesto might be that rich, the rest of us aren’t and so we will continue to need to hew wood and draw water.
I must say of all the demands the one I find most amusing is Demand #6:
We want high-speed rail powered by just renewables and affordable public transit to unite every community in this country – in place of more cars, pipelines and exploding trains that endanger and divide us.
I cannot imagine greater proof that this list was written by a bunch of urbanites than a suggestion that we connect the country (and all cities) by high-speed rail, powered by renewables. As I wrote in my blog post Dispelling Some Myths About British Columbia's Energy Picture:
With improved transit and smart planning we should be able to reduce our energy needs for transportation; but the vast majority of British Columbia cannot be served by mass transit. There is simply not enough money available to give every driver from Creston to Fort Saint John and from Invermere to Prince Rupert an alternative to driving. That means that for most of British Columbia, we will still need personal vehicles.

Moreover, all the transit in the world will not address the need for panel vans and light trucks. Contractors, suppliers and salespeople cannot rely on the transit system. Try to imagine a plumber attempting to transport a new sink or toilet and all her supplies/tools to a job site on a bus?

Finally, no amount of transit will reduce the need for the transport trucks that bring the groceries to market and supply the boutiques of Vancouver. The last time I looked it was pretty much impossible to move a pallet of milk or apples on SkyTrain.

Given our current technological state we are nowhere near a position where British Columbia can achieve 100 percent fossil fuel-free status. Any plan that ignores that fact is simply magical thinking.

I think that last line pretty much summarizes my opinion of the 15 Demands and The Leap Manifesto. They ignore the laws of physics and show a profound misunderstanding of energy science. As such they represent nothing more than the magical thinking of a bunch of activists who have never actually had to hammer out how a system like the one they “demand” would be sourced, built and paid for. The authors of the Manifesto are well-meaning but appear to lack the real-world experience to understand that Canada is a HUGE country and that building a trans-continental railway was an incredible achievement. The thought of connecting every community in Canada by rail (powered by renewables, no less) doesn’t even warrant the description “pie in the sky”; it is simply delusional.

Wednesday, September 9, 2015

On Wi-Fi, Electromagnetic Hypersensitivity and the Nocebo Effect


One of my fears when I wrote my previous post about Wi-Fi was that I was opening a Pandora’s Box on the whole field of electromagnetic fields and health. As I expected, shortly after I posted that blog a number of people tweeted to me explaining how wrong I was about Wi-Fi, with many describing stories of Electromagnetic Hypersensitivity (EHS). Well as my dad used to say: in for a penny, in for a pound. I may as well cover that topic as well. In this post, therefore, I will look into the topic of EHS and in doing so will re-visit the concept of the Nocebo effect.

The World Health Organization defines EHS as:

a variety of non-specific symptoms, which afflicted individuals attribute to exposure to EMF [electromagnetic fields]. The symptoms most commonly experienced include dermatological symptoms (redness, tingling, and burning sensations) as well as neurasthenic and vegetative symptoms (fatigue, tiredness, concentration difficulties, dizziness, nausea, heart palpitation, and digestive disturbances). The collection of symptoms is not part of any recognized syndrome.

Before I can go into a discussion of EHS, however, I need to introduce a couple of topics I have not yet covered in my blog: the double-blind study and the concept of a systematic review or meta-analysis.

In my previous post Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist I describe the field of epidemiology, which the World Health Organization defines as the study of the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. In the field of epidemiology the most reputable testing is carried out through clinical trials. A clinical trial is a prospective study in which humans are exposed to “something” at the discretion of the investigator and followed for an outcome. The biggest problem with clinical trials is that they are conducted on humans and by humans. This is a problem because humans are not machines; we are a very social species who give off any number of non-verbal cues every time we interact. This matters in epidemiology because, in order to confirm that an outcome of a study is due to the “something” in the study, we have to ensure that those very things that make us human do not influence the outcome. As a consequence, in the field of epidemiology randomized, double-blind, placebo-controlled (RDBPC) studies are considered the “gold standard” of studies.

In an RDBPC study both the subjects participating in the study and the researchers carrying out the study are unaware of when the experimental medication or procedure has been given. In drug tests this means splitting the participants into groups where half of the participants get the active ingredient (or medicine) and the other half are given a placebo (historically a sugar pill made to look like the medicine being tested), and ensuring that the treating physicians are not aware of which of the subjects got the real pill and which got the sugar pill. In the testing of EHS this means that neither the scientist running the experiment nor the test subject actually knows when the subject is being exposed to an EM field. As I will describe later, a lot of testing has been done on EHS using either double-blind or single-blind (where the person being tested does not know) methodologies and, as I will discuss, the results have been entirely consistent.

As I have mentioned previously at this blog, the statistics we use in science are very sensitive to population size (the number of subjects tested). The more subjects tested, the more likely you are to identify a small signal or weak effect in a large population. The problem with clinical trials is that each individual study is limited by its budget, its geography and the number of subjects it can test. In a field like EHS there are hundreds of people all over the globe who claim to be particularly sensitive to EM fields. They can’t all be tested at the same time or in the same study, so instead the literature is full of small studies of a handful of individuals each. In order to take advantage of the strength of population statistics, scientists have developed the tools of meta-analysis. A meta-analysis, or alternatively a review article, represents an attempt by one or more authors to summarize the current state of the research on a particular topic. In a meta-analysis the authors will often combine the findings from independent studies in order to enlarge the sample size in the hope of identifying an effect that might have been missed in the individual studies included in the analysis.
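To illustrate why pooling matters, here is a small simulation; it is a sketch, not any particular EHS study's protocol. The assumed numbers (a subject who detects the field 55% of the time against a 50% chance rate, 100 versus 1,000 blinded trials, a simple one-sided test at p < 0.05) are mine, chosen only to make the statistical point.

```python
import random
from math import sqrt, erf

def one_sided_p(successes, n, p0=0.5):
    """P-value for 'detection rate is above chance (p0)', normal approximation."""
    se = sqrt(p0 * (1 - p0) / n)
    z = (successes / n - p0) / se
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))   # 1 - standard normal CDF(z)

def power(true_rate, n_trials, alpha=0.05, n_studies=2000):
    """Fraction of simulated studies that detect a weak but real effect."""
    detected = 0
    for _ in range(n_studies):
        successes = sum(random.random() < true_rate for _ in range(n_trials))
        if one_sided_p(successes, n_trials) < alpha:
            detected += 1
    return detected / n_studies

random.seed(1)
# A subject who is genuinely, but weakly, sensitive: 55% correct vs. 50% chance.
print(power(0.55, 100))    # roughly 0.25: a single small study usually misses it
print(power(0.55, 1000))   # roughly 0.9: pooling ten such studies makes it obvious
```

The same weak effect that an individual small study misses three times out of four becomes nearly impossible to miss once the trials are pooled, which is exactly the service a meta-analysis provides.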

This long introduction is intended to save me a lot of time because, like the study of RF in humans, there is a broad literature on EHS and numerous reviews and meta-analyses have been carried out. This is fortunate for me because it means someone else has done all the work for me. So let’s see what the literature says.

In 2005, Rubin, Munshi and Wessely conducted a Systematic Review of Provocation Studies on Electromagnetic Hypersensitivity. Their conclusion:

The symptoms described by “electromagnetic hypersensitivity” sufferers can be severe and are sometimes disabling. However, it has proved difficult to show under blind conditions that exposure to EMF can trigger these symptoms. This suggests that “electromagnetic hypersensitivity” is unrelated to the presence of EMF, although more research into this phenomenon is required.


Another review of the mobile phone literature from the same period reached a similar conclusion:

based on the limited studies available, there is no valid evidence for an association between impaired well-being and exposure to mobile phone radiation presently. However, the limited quantity and quality of research in this area do not allow to exclude long-term health effects definitely.

In 2007, Oftedal et al. conducted an RDBPC study on mobile phones titled: Mobile phone headache: a double blind, sham-controlled provocation study. The results of that study:

The study gave no evidence that RF fields from mobile phones may cause head pain or discomfort or influence physiological variables. The most likely reason for the symptoms is a nocebo effect.

In 2008, Röösli conducted a systematic review on radiofrequency electromagnetic field exposure and non-specific symptoms of ill health. His conclusion:

This review showed that the large majority of individuals who claims to be able to detect low level RF-EMF are not able to do so under double-blind conditions. If such individuals exist, they represent a small minority and have not been identified yet. The available observational studies do not allow differentiating between biophysical from EMF and nocebo effects.

Between 2005 and 2010 there was a lot of hype on the topic of EHS and, as a result, a lot more research was carried out. So in 2010 Rubin, Nieto-Hernandez and Wessely carried out an updated systematic review of provocation studies on Idiopathic Environmental Intolerance Attributed to Electromagnetic Fields (Formerly ‘Electromagnetic Hypersensitivity’). Their conclusion:

No robust evidence could be found to support this theory. However, the studies included in the review did support the role of the nocebo effect in triggering acute symptoms in IEI-EMF sufferers. Despite the conviction of IEI-EMF sufferers that their symptoms are triggered by exposure to electromagnetic fields, repeated experiments have been unable to replicate this phenomenon under controlled conditions.

Most recently, in 2012, Kwon et al. published another study titled: EHS subjects do not perceive RF EMF emitted from smart phones better than non-EHS subjects. Their conclusion matched all the rest:

In conclusion, there was no indication that EHS subjects perceive RF-EMFs better than non-EHS subjects.

As you can see, the academic literature is essentially unanimous. In every case where a supposedly EHS-sensitive individual was put through a double-blind procedure the result has been the same: the supposedly sensitive individual was unable to perceive an EM field at a rate any higher than chance alone would produce. Don’t even try to ask me about Dr. Havas and her study in the European Journal of Oncology. As described quite clearly at Skeptic North, that was not a blind study and was clearly a case of someone not reading the warnings pamphlet that came with her heart rate monitor.

You will notice above that most of the reviews attribute the symptoms of EHS to the “nocebo effect”. I have written about the nocebo effect before at this blog, but to summarize: the “nocebo effect” is the opposite of the placebo effect. While the placebo effect has the ability to make you feel better in the absence of any active ingredients, the nocebo effect has the ability to make you feel poorly in the absence of any active stimuli. As described in this review paper, the nocebo effect is not as well studied as the placebo effect but it has been demonstrated to be real.

It is important to recognize a couple of things about the nocebo effect. First and foremost, people who “feel bad” or claim to be “sick” via the nocebo effect are neither lying nor faking; rather they are doing one of two things. They are either attributing actual symptoms from other causes to the “nocebo” or they are experiencing phantom symptoms based on their minds playing tricks on them. There are any number of celebrated cases where people have been shown that their “illnesses” were all in their minds. By far the most entertaining one is described in this article from Daily Tech. In that case a community complained about EHS symptoms even though the radio towers supposedly causing the symptoms had been turned off during the time the community members claimed they were being made ill by the towers.

So to summarize: as I described in my previous blog post, untold thousands of studies have been conducted on Wi-Fi and the results are clear, RF is not a serious human health risk. Rather, it is almost a perfect example of a de minimis risk (which I discuss in another blog post); a de minimis risk is a risk that, while it may exist, is too small to be of societal concern. EHS, meanwhile, is a real problem, but not one related to the presence or absence of RF fields. Instead it is related to the fears about Wi-Fi that have been spread by individuals who ignore the mountains of peer-reviewed research, meta-analyses and systematic reviews demonstrating that Wi-Fi is not a risk to human health. EHS has, quite literally, become the textbook example of the nocebo effect. When a whole community can claim to be made sick by a transmission tower that has been turned off, you have a classic case of individuals quite literally scaring themselves and their children sick.

Thursday, September 3, 2015

On Wi-Fi in Schools and the Precautionary Principle


I knew this day was coming. I wasn’t sure when, but I knew that at some point as a promoter of evidence-based decision-making I would have to take on the topic of Wi-Fi in schools at this blog. Well the new school year is here and the topic has started to bubble up to the surface in the local press and I have been asked to comment on it. Right off the top I want readers to know that this blog post will not go into detail about the research. I will provide links to lots of resources but want to look at this topic from a policy perspective, with a special emphasis on how the Precautionary Principle is misused in activist arguments.
Let's start with a common misconception: Wi-Fi is not a new technology. Rather, Wi-Fi is a new twist on an old technology: transmitting information via radiofrequency (RF) waves. Humans have been broadcasting radio and microwave transmissions across the planet for over a century.
As for health studies, according to the World Health Organization, over the past 30 years approximately 25,000 articles have been published on the biological effects and medical applications of non-ionizing radiation. RF is just another form of non-ionizing radiation.
If you are looking for detailed discussions about the science behind RF and cancer then there are a lot of good resources out there. The U.S. National Cancer Institute has a very good Question and Answers page, Health Canada has a Frequently Asked Questions page and the BC Center for Disease Control has a radiofrequency toolkit that everyone reading on the topic should look at.
As for the peer-reviewed science, the United Kingdom Advisory Group on Non-Ionising Radiation (AGNIR) put out a hefty report in 2012 that is worth a read if you have time to digest 300+ pages of detailed discussion and references on the topic. A very readable investigation of Wi-Fi activism in Canada was conducted by Bad Science Watch, and further Canadian resources include Skeptic North’s take on the issue. As for Wi-Fi in schools and public places, Skeptic Blog has a good summary and Health Canada has a good video on the topic.
Anti-Wi-Fi activists will point out that the International Agency for Research on Cancer (IARC) has investigated radiofrequency electromagnetic fields and classified them as a Group 2B possible carcinogen. The IARC Monograph on the Evaluation of Carcinogenic Risks to Humans provides a comprehensive examination of the topic. The critical thing to understand is that Group 2B compounds are, by their very definition, not known to be carcinogens.
Group 2B is a category used when a causal association looks like it might be possible, but when other factors cannot be ruled out with reasonable confidence. Group 2B is, thus, a placeholder for compounds that haven't been shown to cause cancer but are of further interest for study. Some of these compounds, like acetaldehyde and benz[a]anthracene, will likely be determined to cause cancer but others like coffee, pickled vegetables and talc-based body powder, are much less likely to do so. My opinion, based on a mountain of peer-reviewed research, is that radiofrequency electromagnetic fields will be in the latter group and not the former.
I briefly mentioned Wi-Fi on Twitter yesterday and immediately an activist brought out their big gun: The BioInitiative Report. This report is very official-sounding but it has been debunked and discounted by every scientific body that has looked at it, from Australia to the European Union. The EMF & Health website has a whole section dedicated to it. I mentioned this fact and got directed to a single study that indicates the possibility that RF can cause a particular type of cancer. That study didn’t really bother me either.
Scientific research uses as its gold standard the 95% confidence level (p<0.05). What this means is that if you run a study 100 times on something that has no real effect, about five of those runs will give you a false positive (saying that a compound causes an effect when it really does not). Given the approximately 25,000 articles published on the topic, it would be statistically improbable for there to be no studies reporting a negative outcome; a handful of scattered positives is exactly what chance predicts.
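A quick back-of-envelope illustration of that point. Treating each of those articles as a single independent hypothesis test is a simplifying assumption (many papers report several tests, others none), but it shows the scale of the false-positive problem.

```python
# Expected spurious "positive" findings if RF had no health effect at all.
n_tests = 25_000     # the approximate article count, treated as one test each
alpha = 0.05         # conventional false-positive rate per test

expected_false_positives = n_tests * alpha
prob_of_zero_positives = (1 - alpha) ** n_tests

print(f"False positives expected by chance alone: ~{expected_false_positives:,.0f}")
print(f"Probability of seeing none at all: {prob_of_zero_positives:.2e}")
# The second number prints as 0.00e+00 because the true value is so
# astronomically small that it underflows a double-precision float.
```

In other words, chance alone predicts on the order of a thousand spurious "positive" findings in a literature that size; finding a handful is not evidence of a real effect.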
What is important is to compare the number of positive studies against the number of negative ones. Moreover, a careful examination of the handful of positive studies shows that almost every one involved a particularly rare type of cancer and a minimal effect. This is the ideal scenario for a false positive: population statistics break down when sample sizes are small, and in most of these studies the number of affected individuals is tiny with respect to the general population. As I pointed out above, there is a copious body of literature that says that Wi-Fi is safe.
To back up that copious body of literature, consider that from an epidemiological perspective we have been engaged in a massive human trial for the last 70+ years. From radar operators during the Second World War to children with cell phones in 2015, billions of humans world-wide have been exposed to varying intensities and doses of microwave and radio wave radiation. Just look at your cell phone right now: almost anywhere you go you are in range of a Wi-Fi router, and you are almost always in range of a radio signal. The fact is, we have not seen spikes in any of those rare cancers purportedly related, via these questionable studies, to exposure to RF.
Certainly we hear about a single police officer here or a woman there who got a suspicious cancer, but as I point out in my post Risk Assessment Epilogue: Have a bad case of Anecdotes? Better call an Epidemiologist, that is why we have epidemiologists. Epidemiologists look at all the anecdotes and see if there is some underlying trend. The results are categorical: RF is not a serious human health risk. Rather, it is almost a perfect example of a de minimis risk (which I discuss in another blog post How Big and Small Numbers Influence Science Communication Part 2: Understanding de minimis risk). As I point out in that blog posting, a de minimis risk is a risk that, while it may exist, is too small to be of societal concern.
So how does an activist try to sell you on making a societal change when dealing with a de minimis risk? The answer is almost always: the Precautionary Principle. Activists use the Precautionary Principle because it sounds good and most people don’t actually know what it says. In a previous post How Big and Small Numbers Influence Science Communication Part 3: Understanding "Acceptable" Risk I introduced readers to the real definition of the Precautionary Principle. The actual Precautionary Principle was defined as Principle 15 of the Rio Declaration, which states:
"In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”
The Precautionary Principle does not say that all risk is bad risk and that all risks must be avoided because that is not a realistic way to run a society. Getting out of bed in the morning poses a non-zero risk of slipping and breaking your neck. Using the Wi-Fi activist view of the Precautionary Principle we would have to ban all beds to avoid that potentially fatal risk. Instead of requiring “no risk” in the real world we ask: what is an “acceptable risk”?
As I have written previously, since we live every day in a world full of risk, we need to figure out how to deal with and understand that risk. That is why we (Canadians) hire epidemiologists and other scientists at places like Health Canada: to help us understand and differentiate between acceptable and unacceptable risks. The reason smart meters, Wi-Fi and cell phones are of little concern to Health Canada is that these technologies are not some mysterious things for which the Precautionary Principle might apply. Transmitting information on the microwave spectrum is a mature technology that we have used for almost 100 years. The Precautionary Principle does not apply because we have almost 25,000 scientific studies that each individually say that RF exposure is probably safe; when you repeat that “probably” 25,000 times, what it really means is that you are safe.
The Precautionary Principle also considers the consequences of actions. In the case of Wi-Fi in schools you have a real and obvious benefit. Students with access to Wi-Fi have access to more teaching resources and a better educational experience than students without. If you want to eliminate Wi-Fi in the classroom you either accept that you are going to give your kids a lower standard of education or you have to hardwire every classroom in every school. The hardwiring of schools is often thrown out as if it were a viable alternative, but the cost of hardwiring every classroom in British Columbia would simply be unaffordable. Moreover, it is not as if schools without Wi-Fi are RF-free zones to begin with. An informative report on CTV demonstrated quite categorically that schools without Wi-Fi set-ups can have higher levels of ambient Wi-Fi signal running through them than schools with Wi-Fi. Heck, any parent who has attended a Christmas concert in their kids’ gym knows that they can typically find at least a dozen Wi-Fi networks on their cell phone.
So to respond to the obvious activist rebuttals to this piece: Wi-Fi is not some brand new technology that we must fear, it is simply a new spin on an old technology. Wi-Fi is not a known carcinogen; rather, after 25,000 studies on the RF spectrum and its effects on humans, and after the exposure of billions of humans to RF, the best scientists can say is that it might be responsible for a handful of rare cancers. If RF is a cancer risk, it is one below the de minimis threshold: one that may exist but is too small to be of societal concern. As for applying the Precautionary Principle, that is just a red herring. The Precautionary Principle does not say that you accept no risk, only that you factor risks and rewards into your calculations, and in this case the risks are negligible and the rewards significant. Applying the Precautionary Principle, it is an easy call to keep Wi-Fi in the classrooms.