Tuesday, June 30, 2015

On RCP8.5 and "the Business as Usual" Scenario - Different beasts not to be confused

This weekend I finally got the opportunity to read Dr. Matt Ridley’s recent essay “The Climate War’s Damage to Science” in Quadrant Online. As a fellow Lukewarmer I try to keep abreast of Dr. Ridley’s essays and articles and am seldom disappointed by his prose. This article, like most of his work, made for a very interesting read and I would recommend it to anyone interested in the topic of climate change politics. While reading the essay one particular paragraph jumped out at me. The paragraph described one of the Representative Concentration Pathways (RCPs) used in the IPCC Fifth Assessment Report. In the essay Dr. Ridley wrote:
What is more, in the small print describing the assumptions of the “representative concentration pathways”, it admits that the top of the range will only be reached if sensitivity to carbon dioxide is high (which is doubtful); if world population growth re-accelerates (which is unlikely); if carbon dioxide absorption by the oceans slows down (which is improbable); and if the world economy goes in a very odd direction, giving up gas but increasing coal use tenfold (which is implausible).
This paragraph reminded me that I had previously committed to writing about the IPCC RCPs and in particular about RCP8.5 which is often referred to, incorrectly, as the “Business as Usual Scenario”. The reason for my interest in this rather anodyne topic is that it actually represents a quite excellent example of how science is misrepresented to the public in the climate change debate.
As I describe in my post “Does the climate change debate need a reset? - on name calling in the climate change debate” one of the critical battles in any debate is control over the labelling of the actors. If you can apply the best possible label to yourself and the least agreeable label to your opponent you immediately gain the upper hand. In the climate change debate, the “Business as Usual” label has been used more times than I can count, with activists from Skeptical Science to the Suzuki Foundation, and from the Pembina Institute to 350.org, all finding some way to slip that phrase into their calls demanding immediate action (and of course donations to their cause). As this post will demonstrate, however, the “Business as Usual” descriptor used by the activists in the climate debate is nothing of the sort. Rather it is an artifact from earlier versions of the IPCC reports and was conspicuous by its absence in the most recent (Fifth Assessment) report.
Let’s start with some background. As anyone who has read my writing knows, one of the ways to make science more reader-friendly is to use analogies and personal anecdotes. Of course the risk with analogies is that a bad analogy can distract from your narrative. Similarly, anecdotes can personalize your writing and make it more approachable, but anecdotes are only valuable if they are subsequently supported by actual data since, as the old saw goes, “the plural of anecdote is not data”. In this vein, the earliest IPCC reports used “Scenarios” to inform their modelling exercises. As they put it:
Scenarios are images of the future, or alternative futures. They are neither predictions nor forecasts. Rather, each scenario is one alternative image of how the future might unfold. A set of scenarios assists in the understanding of possible future developments of complex systems. Some systems, those that are well understood and for which complete information is available, can be modeled with some certainty, as is frequently the case in the physical sciences, and their future states predicted. However, many physical and social systems are poorly understood, and information on the relevant variables is so incomplete that they can be appreciated only through intuition and are best communicated by images and stories. Prediction is not possible in such cases (ref).
I have neither the time nor the expertise to discuss the scenarios in a manner worthy of them and so will leave that to Dr. John Nielsen-Gammon from Texas A&M University who has prepared a brief breakdown on the history of the different scenarios used by the IPCC (ref). He also describes the process by which the most recent IPCC Report eliminated these scenarios. The reason for this is simple: by 2014, the older scenarios had outlived their usefulness. The public was no longer in need of spoon-feeding and instead the RCPs were rolled out. Four RCPs were generated for the Fifth Assessment report representing four different forcing pathways. A simplified definition of a “forcing” is the difference between the energy from the sun absorbed by the earth and that radiated out into space (ref). The four RCPs were labeled by the approximate radiative forcing (in watts per square metre) expected to be reached by following the respective pathways during or near the end of the 21st century. The four pathways are RCP2.6, RCP4.5, RCP6.0 and RCP8.5 (ref). The roles of the RCPs, therefore, were not to inform the public as much as to inform the modellers in the IPCC process. Specifically, they were intended to drive the climate model simulations that formed the basis of many of the future projections in the most recent IPCC report (ref). To put it another way, RCP8.5 was a pathway designed to model a set of conditions that could lead to a world where climate forcing by the year 2100 reached 8.5 watts per square metre. It was essentially designed to provide a worst-case scenario [used in its traditional literary sense] if everything in the world went sideways or backwards (as I will detail later).
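For readers who like to see the numbers laid out, here is a back-of-the-envelope sketch of what those forcing labels imply. The climate sensitivity parameter used below (about 0.8 C per watt per square metre, roughly equivalent to 3 C of warming per doubling of CO2) is my own illustrative assumption, not a figure from the IPCC, so treat the warming numbers as ballpark only:

```python
# Each RCP is named for its approximate year-2100 radiative forcing in W/m^2.
rcp_forcings = {"RCP2.6": 2.6, "RCP4.5": 4.5, "RCP6.0": 6.0, "RCP8.5": 8.5}

# Assumed climate sensitivity parameter: ~0.8 C of eventual warming per W/m^2
# (roughly 3 C per CO2 doubling at 3.7 W/m^2). Illustrative assumption only.
SENSITIVITY = 0.8  # degrees C per (W/m^2)

for name, forcing in sorted(rcp_forcings.items(), key=lambda kv: kv[1]):
    warming = forcing * SENSITIVITY
    print(f"{name}: {forcing} W/m^2 -> roughly {warming:.1f} C of eventual warming")
```

The point of the sketch is simply that the number on the label is a forcing target, not a temperature prediction; what a forcing means for temperature depends entirely on the sensitivity you assume, which is exactly the point Dr. Ridley raises.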
The problem with the IPCC retiring its old scenarios is that a lot of activists were very happy with the old paradigm and had no desire to change their tune. They wanted something that they could sink their teeth into and use to scare the public and politicians. Since the IPCC had taken away their well-established tools they appear to have decided to re-label one of the new tools to suit their purposes. So they affixed the retired “Business as Usual” scenario label (some use the term “status quo”) to RCP8.5 and continued on their merry way scaring up new funding. The only problem is that, by definition, RCP8.5 was not a “Business as Usual” scenario; rather it was
developed to represent a high-end emissions scenario. “Compared to the scenario literature RCP8.5 depicts thus a relatively conservative business as usual case with low income, high population and high energy demand due to only modest improvements in energy intensity.” (Riahi et al. 2011 ref) RCP8.5 comes in around the 90th percentile of published business-as-usual (or equivalently, baseline) scenarios, so it is higher than most business-as-usual scenarios. (van Vuuren et al. 2011a ref) - (Text ref)

What the activists call “Business as Usual” actually represents the 90th percentile of the scenarios prepared for the IPCC that involved little change in environmental and economic policies (sometimes referred to colloquially as the “no significant action” scenarios). These scenarios represented the worst of the worst, where governments and industry did not do anything to improve their lot. As such, the no significant action scenarios could only be described as “business as usual” if you happened to be living in 1990 or 1996 when the IPCC prepared its first couple of reports. That would be before we had spent 20 or so years learning about climate change; before the Kyoto Protocol and the world-wide drive to renewable energy; before the discovery of tight shale gas and the move away from coal as the primary source of future energy plants in much of North America, Europe and Asia. To put it simply, being at the 90th percentile of that group put you in pretty impressive company and does not relate to anything that anyone in the real world would actually expect to happen. Rather, in a relative sense, as the 90th percentile of all those earlier estimates it would be the scenario that comes just below the one where Godzilla emerges from the sea to burn Tokyo and the one where the atmosphere spontaneously combusts from the endless bursts of Hiroshima-bomb-powered forcings.
I have made a pretty bold statement that RCP8.5 is not really relevant in a real-world sense and I suppose it is time to back that up with data. In order to understand how RCP8.5 has already been trumped by history you need to look at the history and contents of RCP8.5. Readers interested in the details should read the paper by Riahi et al. (2011 ref). Dr. Riahi is one of the authors of the original IPCC Scenarios upon which RCP8.5 was based in 2007 (ref). At that time, consistent with the education theme, each IPCC Scenario had a “Storyline”. The storyline described the assumptions of the scenario in easy-to-understand language. The “Storyline” for RCP8.5 originates from Scenario A2 in the Third IPCC Report but was further refined in Riahi et al. (2007 ref) as A2r. As recounted in the Third IPCC Report (and detailed in these references: ref, ref and ref) the A2 storyline was characterized by:

·       lower trade flows, relatively slow capital stock turnover, and slower technological change;
·       less international cooperation than the A1 or B1 worlds. People, ideas, and capital are less mobile so that technology diffuses more slowly than in the other scenario families;
·       international disparities in productivity, and hence income per capita, are largely maintained or increased in absolute terms;
·       development of renewable energy technologies is delayed and the technologies are not shared widely between trade blocs;
·       delayed land use improvements for agriculture resulting in increased pollution and increased negative land use emissions until very late in the scenario (close to 2100);
·       a rebound in population growth, resulting in a human population of 15 billion by 2100; and
·       a tenfold increase in the use of coal as a power source and a move away from natural gas as an energy source.

Looking at what the activists have labelled the “Business as Usual” scenario we see a slew of assumptions that are anything but business as usual. It is generally accepted in demographic circles that the human population will max out at between 10 and 12 billion (ref), so the population estimate is off by around 25%. Rather than trade blocs hoarding technologies, we are living in an increasingly international world where technological improvements move at the speed of the internet and new and improved renewable energy technologies are both being developed and shared worldwide. Coal use is decreasing as a percentage of our energy supply, and the shale revolution and access to cheap and plentiful natural gas have resulted in an international market for liquefied natural gas and continuing decreases, not increases, in energy intensity. To put it bluntly, virtually every one of the assumptions of RCP8.5 has been demonstrated to be categorically wrong. No surprises here: when the IPCC picked a worst-case scenario they went full bore on that approach.
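For those who want to check my arithmetic on the population numbers, a two-line sketch (using the high end of the 10-12 billion demographic range as the comparison point, which is the most generous possible assumption for RCP8.5):

```python
# Checking the "off by around 25%" population claim from the text above.
rcp85_assumed_billions = 15    # RCP8.5 storyline population in 2100
demographic_max_billions = 12  # high end of the accepted 10-12 billion range

overshoot_pct = 100 * (rcp85_assumed_billions - demographic_max_billions) / demographic_max_billions
print(f"RCP8.5 overshoots the demographic consensus by {overshoot_pct:.0f}%")
```

Against the low end of the range (10 billion) the overshoot would be 50%, so "around 25%" is the charitable reading.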
I see I am running long so let’s summarize this post. When you see an abstract where the authors say something like:
We show that although the global mean number of days above freezing will increase by up to 7% by 2100 under “business as usual” (representative concentration pathway [RCP] 8.5), suitable growing days will actually decrease globally by up to 11%.... tropical areas could lose up to 200 suitable plant growing days per year....Human populations will also be affected, with up to ~2,100 million of the poorest people in the world (~30% of the world’s population) highly vulnerable to changes in the supply of plant-related goods and services (ref).

It is time to gently close the journal and back away slowly so as not to attract the author’s attention. By basing their study on RCP8.5 and specifically referring to it as the “business as usual” scenario the authors have told you all you need to know about the reliability of their paper. Similarly, when an activist talks about “business as usual” in their sales pitch, it is time to put your wallet back in your pocket. If you are so inclined, then it is time for you to find a group that is more serious about improving our planet and more in keeping with what the IPCC actually has to say. RCP8.5 is not a business as usual scenario but rather a future scenario that has been soundly invalidated by conditions in the present.

Saturday, June 20, 2015

So do you really need an $8000 water treatment system in Langley?

I figure it is time for a change-of-pace post on this blog. For the last little while I have gotten into some relatively heavy technical stuff that has scared some of my most faithful readers (including my mom) away. Today I am going to discuss something that is much more approachable and applicable to our daily lives in a post about water quality. The idea for this post comes from an unexpected visit I had this week from a salesman for a water treatment system company. The visit started with misdirection, was filled with misinformation and ended with me realizing how easy it is for someone using the right combination of words to scare families into buying an expensive system they simply do not need.

The visit was preceded by an official-sounding telephone call. Months ago my wife got a telephone call from someone purporting to be associated with the Township (we live in the Township of Langley) who asked her a bunch of questions about our water. Tuesday night we got a follow-up call from a woman who said she was following up on the survey. She talked to my wife and made it sound as if the Township had hired their company to come out and sample our tap water. There was no indication that this was a sales call; rather she made it sound like we were being asked to do our civic duty by allowing their professional to come into our house to do some testing. Since it was a rare night when we didn’t have any children’s sports scheduled, and I was going to be home, my wife suggested that they come that night and we scheduled a time. At the end of the phone call my wife was asked our occupations (purportedly for demographic purposes); she said she was a teacher and I was a Chemist. Now any sensible company would have had red flags flying upon hearing my profession (like the lawyer in the jury pool who gets tossed without any other consideration), but maybe the woman thought I was a pharmacist, so who knows.

At the scheduled time a gentleman arrived at our door. He was soft-spoken and wearing a pair of khakis, looking every bit the part of an environmental technician (and I should know) with a clamshell briefcase full of supplies. He gave me a card with his company’s name (for the purpose of this blog I am not going to use the company or tester’s name as the point is not this person in particular but the approach in general) and we welcomed him in. The card indicated that the company was a “Gold Seal Approved – Canadian Water Quality Association Member” which sounded sufficiently official to allay our concerns. He efficiently set up his testing station and cleaned his supplies with a bottle of water marked as “reverse-osmosis water” (ROW), all the time chatting about water. I moved back to give him space (I didn’t want to interfere with his testing) but instead he invited my wife and me forward. First he pulled out a hand-held total dissolved solids (TDS) meter. A TDS meter measures the level of TDS in your water and is a pretty straightforward tool. He first measured his ROW and got a reading of 4 parts per million (ppm), which is a very low reading consistent with a distilled/RO water. He then measured our tap water and got a reading of 14 ppm; he re-filled the beaker, got 21 ppm and started tut-tutting. At this point I started to twig to the fact that this was not a Township water test as he didn’t have a notebook and wasn’t recording anything anywhere. Now as a Chemist who used to do this type of testing for a living (I now have technicians who do my testing for me) I can say TDS values of 14 to 21 ppm from our tap are absolutely tremendous numbers. The Canadian Drinking Water Guideline for TDS is 500 ppm (ref) and at 21 ppm I am almost ready to retire my Brita. The tester however was quite concerned and pointed out how our value was “elevated”; I almost choked when he used the word “high” at one point.
He pointed out that the water system he uses can get your number down to single digits (as low as 1) which he made sound pretty significant. At this point I smiled to my wife as we knew this was no water test and we got ready for the sales pitch.
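To put his “elevated” readings in perspective, here is the arithmetic. The readings and the 500 ppm guideline value are the numbers from the visit described above; only the percentages are added here:

```python
# How do the salesman's "elevated" tap water readings compare to the guideline?
GUIDELINE_PPM = 500          # Canadian drinking water guideline for TDS
tap_readings_ppm = [14, 21]  # the two readings from our tap

for reading in tap_readings_ppm:
    pct_of_guideline = 100 * reading / GUIDELINE_PPM
    print(f"{reading} ppm is {pct_of_guideline:.1f}% of the 500 ppm guideline")
```

Even the higher of the two readings sits at about 4% of the guideline value, which is why "elevated" was such a remarkable choice of word.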
The next test was the addition of a single drop of “Agent #5” (I admit I might have the number wrong) which turned the water purple. Our water was a darker purple, which he informed us was a bad thing. He then cleaned his glassware and re-filled the two beakers (one with his water, one with ours) and added two drops of our dish soap. He shook the two beakers vigorously and lo and behold his had more suds. He added two more drops of our soap to our water and shook it and the suds were still not as fluffy. He attributed this to the TDS in the water. In this he was almost right, in that one benefit of reverse osmosis is that it softens water a tiny bit and, as anyone who has dealt with water softeners knows, the softer your water the better your soap bubbles. That being said, if your water is too soft it can seem almost impossible to wash the suds off your body. So now we had had three tests, none showing anything even slightly wrong with our water. He then moved to our soap, pointing out that our dish soap was not organic and likely had sulfonic acids in it. How he knew is an interesting question as our dish soap is from a container without any labels, but even then sulfonic acids are a feature of detergents that is completely unspectacular. Soaps need surfactants to deal with oils and sulfonic acid is just one of many alternatives. It is like being made to fear your vinegar because it has acetic acid in it...for the non-chemist, vinegar is simply a mix of acetic acid and water. He then informed me that the chlorine in municipal water mixes with sulfonic acid to make mustard gas, which, of course, is chemically impossible.

His next line of discussion was the chlorine in our municipal water and how this chlorine can make people sick. He pointed out that when we shower we are exposed to chlorine gas. This is a common ploy and one that, while chemically true, is completely deceptive. I discussed the concept in detail in an earlier post, How Big and Small Numbers Influence Science Communication Part 2: Understanding de minimis risk, which explains how minimal concentrations of compounds are essentially ignored by your body. Except in exceptional cases, like when the municipality is flushing the lines with chlorine (which they announce in the papers), the amount of chlorine gas in your shower would not harm a fly, let alone a human. At this point it was clear to him he was losing us. I kept turning and smiling to my wife when he said something chemically impossible/improbable, so he cut to the chase and informed us that he was a representative of a water treatment company and provided us with a quotation for a system at our water intake and a second system by our kitchen sink: total cost only $8000, and as a bonus they would supply us with all the soap (organic soap, he said) for 5 to 7 years....I never figured out why it was such an imprecise number. He showed me the soap samples and the ingredient list looked no different from the list on the soaps at your grocery store...although one of the soaps did include goat milk? He then told us about a recent stop where he had visited a $5 million mansion with a reverse osmosis system treating well water that had TDS levels of 140 ppm...for shame...for shame...and that he was going to convince them to buy a better system from him. As I mentioned above, the water quality guideline is 500 ppm, so 140 ppm from well water is pretty darn good. After this discussion it was time to thank the representative for his visit and welcome him to leave.

Once the representative was gone, I had a sit-down with my wife who asked me about all the things she had been told. Having lived with me long enough, she knew that much of what he had said was wrong but did not know in what way. It struck me that this gentleman’s scientific-sounding patter would likely sway the uninformed and could convince someone that they needed a water treatment system when all the data he presented proved that we had no use for such a system. A Brita water system would get our drinking water as clean as theirs and frankly leaving a pitcher of water in the fridge does just as good a job at getting rid of chlorine as either their system or a Brita. We live in a society where we are constantly informed that everything around us is unsafe. Chemicals are not scary things; they make up everything around us. For a really easy read on the topic I would direct you to a post at “the Logic of Science” titled 5 simple chemistry facts that everyone should understand before talking about science. As for your drinking water? If you live in the British Columbia lower mainland and get your water from the Metro Vancouver water system then no, you don’t likely need a reverse-osmosis system costing $8000 when a $1.50 water pitcher you leave in your fridge will give you essentially the same quality water.

Wednesday, June 17, 2015

Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part II What about those pesky rare earth metals?

My last post: Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part I Why no nuclear power? introduced my readers to the report in Energy & Environmental Science titled: 100% clean and renewable wind, water, and sunlight (WWS) all-sector energy roadmaps for the 50 United States (100% WWS USA hereafter). In that post I discussed the confusing decision by the authors (Jacobson, Delucchi et al. at http://thesolutionsproject.org/) to consciously ignore the option of nuclear power in their vision for a 100% fossil fuel-free future for the USA and the world. Today’s post will follow up on my previous one by looking more closely at some of the assumptions underlying this proposed roadmap for our future. As I pointed out in my previous post, David Roberts at Vox.com suggested that any proposed effort to achieve a 100% fossil fuel-free future based solely on wind, water and sunlight would require a World War II–scale mobilization. My intention in this post is to demonstrate that the proposed approach cannot be achieved, as designed, even with a World War II-scale mobilization. Rather, I intend to demonstrate that even a World War II-scale mobilization by the United States will fail due to an absence of the raw resources (specifically rare earth metals and lithium) needed to achieve the 100% WWS USA paper’s goal.
As discussed in my last blog posting, the most evident failing of the 100% WWS USA paper is that it lacks the critical data necessary to demonstrate how the authors will achieve their goal. That is, they describe in detail just how much tidal energy they will need to achieve their goal but they don’t provide any details as to how to ensure that the raw materials necessary to produce the technologies are available. Instead, as in the case with nuclear power, all the critical details are in Jacobson and Delucchi’s earlier pair of papers titled “Providing all global energy with wind, water, and solar power”, Part I and Part II (called 100% WWS World Part I and 100% WWS World Part II hereafter). Thus the 100% WWS USA paper provides a broad overview (a strategy) but it does not provide a method to achieve that goal (the logistics). Keeping with the military theme of this post I will remind my readers of the old military saw: “strategies and tactics win battles but logistics win wars”. Well, Jacobson and Delucchi’s work is strong on strategies but exceedingly weak on the logistics. So let’s start looking at the logistics.
I really couldn’t go much further in this post without pointing out my previous post On renewables and compromises Part II Rare earths in renewable technologies where I discuss rare earth metals (called rare earths hereafter) and their importance for renewable energy technologies. As I point out in that post, rare earths are the elements that have allowed us to develop all these incredible renewable energy technologies. Neodymium (Nd) is the "magic" ingredient that makes high-power permanent magnets a reality. Lanthanum (La) and Cerium (Ce) are what make catalytic converters work. Your cell phone, your LCD screen, your hospital's PET scanner all depend entirely on the existence of rare earths. To be clear, we are not talking about traces of the stuff either. A single large wind turbine (rated at about 3.5 megawatts) typically contains 600 kilograms of rare earth metals (ref). European Parliament researchers have established that major deployment of photovoltaic cells and wind turbines may have a serious impact on the future demand for eight significant elements: Gallium (Ga), Indium (In), Selenium (Se), Tellurium (Te), Dysprosium (Dy), Nd, Praseodymium (Pr) and Terbium (Tb) (ref - admittedly some of those are not rare earths but they are mined in similar mines/geologic formations).
So ignoring the issues with nuclear power in the 100% WWS USA paper, another significant problem with the 100% WWS World Part I paper is that the authors gloss over concerns about supplies of rare earth metals. Instead they appear to pretend that we have essentially limitless supplies of rare earths or, where supplies are limited, that we can easily access the complete planetary resource of these materials with little effort. In their 100% WWS World Part I paper Jacobson and Delucchi note that the annual production of Nd oxide (needed for wind turbines and anything else that uses a permanent magnet) was 22,000 metric tonnes in 2008. They then point out that their 100% world scenario would require 122,000 metric tonnes/year of Nd oxide. That is quite a shortfall, especially considering we aren’t making any serious efforts to address it. While most manufacturers of electric vehicles rely on Nd, in the same paper they wave away the need for Nd in electric vehicles by stating that we will come up with an alternative, as Tesla has done using Lithium (Li). The problem is that by doing so they just punt the ball down the road, since if we are not relying on Nd we are stuck relying on another limiting component, Li (to be discussed later). Going back to Nd, Jacobson and Delucchi wave their hands and look at the global Nd reserves. They suggest that the global reserves can handle their usage numbers for up to 100 years, at which point the world will be out of Nd. The question never answered is whether the entire world is going to abandon its historic concerns and frantically mine every ounce of Nd it can find. In my earlier post I point out that any attempt to ramp up mining capacity will require significant political and ecological compromises, which might turn out to be a bit challenging. Moreover, some nations may decide that they have other domestic uses for Nd and don’t want the entire planetary supply used to provide the first world with wind turbines.
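To make the scale of that Nd oxide gap concrete, a quick sketch using the two figures quoted above from 100% WWS World Part I:

```python
# The neodymium oxide gap implied by the figures quoted in the text.
production_2008_t = 22_000  # tonnes/year of Nd oxide produced in 2008
required_t = 122_000        # tonnes/year required under the 100% world scenario

shortfall_t = required_t - production_2008_t
scale_up = required_t / production_2008_t
print(f"Shortfall: {shortfall_t:,} tonnes/year; "
      f"production would need to grow about {scale_up:.1f}-fold")
```

In other words, the scenario requires the world to more than quintuple Nd oxide output and then sustain that rate, mining compromises and all.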
Having talked about the big-name rare earth (Nd) let’s talk about some of the lesser known but equally important ones. Many of my readers will remember that old quotation attributed to Benjamin Franklin that told of “how for want of a horse-shoe nail a kingdom was lost”. Well in the world of renewables that horse-shoe nail is likely the rare earth element Dysprosium (Dy). I will venture a guess that 99.99% of my readers could not place this element on a blank periodic table (I couldn’t and I once had to memorize the periodic table to pass an undergraduate chemistry course). Dy is a critical component of the permanent magnets used in wind turbines and electric vehicle engines and unlike Nd it appears in rare earth deposits in very low concentrations (ref). Over 99% of the world supply of Dy comes from Chinese sources (ref) and under current use scenarios China estimates it has about a 15-25 year supply of Dy (ref; note this ref is a PDF file). Because of this, the element is the top rare earth metal on the US Energy Critical Materials Strategy list (ref); close behind are Nd, Europium (Eu), Te and Yttrium (Y). Given its importance and limited supply, Dy alone has the potential to upset Jacobson and Delucchi’s version of a clean energy future. Certainly, if necessary, alternatives to Dy will be identified. But it is unlikely that any alternative will provide the efficiency that Dy does in permanent magnets, which means that magnets without Dy will be less efficient and thus will not be able to produce energy at the rate required to meet their future use scenario. Did you see how that worked? For want of Dy a permanent magnet was lost...for want of a magnet all turbines were lost, for want of all turbines a future scenario was lost. For those of you interested, I strongly advise reading how the US Department of Energy is planning to deal with future shortages of these critical rare earths (ref).
As I note above, Dy is not the only critical rare earth that is not being produced in any reasonable volume in the Western world. Eu, Te and Y are also critical components of most of our major energy plans and at this time we simply lack any reasonable supply of them outside of China. What every environmentalist must understand is that any serious effort to move to a 100% renewable future can only be achieved if we make a conscious and concentrated effort to locate, mine and refine rare earth metals in the Western world.
Having discussed rare earth metals, let’s consider the major alternative presented by Jacobson and Delucchi: lithium. As any follower of modern tech trends will point out, Tesla is staking its battery business on lithium-based NCA (nickel-cobalt-aluminum oxide) cathodes (ref). This is pretty much what Jacobson and Delucchi suggest will be one solution to the shortage of Nd. The problem is that each battery pack can contain a lot of Li. While Tesla has kept its numbers under wraps, it has been estimated that each battery pack in a Tesla Model S uses about 21.4 kg of Li (ref). In 100% WWS World Part I Jacobson and Delucchi estimated that the production of only 26 million electric vehicles would require 260,000 metric tonnes of Li. They point out that at that consumption level we would exhaust the current world reserves of Li in less than 50 years. While 26 million electric vehicles seems like a lot, that is only about half of the vehicles produced in the world each year. Under their 100% WWS USA scenario Jacobson and Delucchi talk about electrifying virtually every mode of land transportation. That would mean a lot more than 260,000 metric tonnes of Li a year, and that is only for electric vehicles. It completely ignores any other battery (like the Tesla wall units or even rechargeable AAs) that might be used to help store all that solar energy being collected during the daytime but intended for use once the sun goes down. Jacobson and Delucchi point out that we can always extract Li from seawater; but they also point out that seawater extraction is a very energy-intensive process. That energy has not been included in any of their energy budgets. So you see, once again the picture looks fine from a distance, but once you look up close you see all these little flaws and, like a knitted sweater, once you start pulling at the loose strings things start falling apart.
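To see how those lithium numbers hang together, here is a short sketch. The implied reserve figure at the end is back-calculated from the authors’ “less than 50 years” claim and is my own rough inference, not a sourced number:

```python
# Rough lithium arithmetic using the figures quoted in this post.
vehicles_per_year = 26_000_000  # EVs in Jacobson & Delucchi's example
li_per_year_t = 260_000         # tonnes of Li those vehicles would require

# Lithium per vehicle implied by the authors' own numbers.
li_per_vehicle_kg = li_per_year_t * 1000 / vehicles_per_year
print(f"Implied Li per vehicle: {li_per_vehicle_kg:.0f} kg")

# If reserves last under 50 years at this draw rate, reserves are below:
implied_reserves_t = li_per_year_t * 50  # back-calculated, not a sourced figure
print(f"Implied reserves ceiling: {implied_reserves_t / 1e6:.0f} million tonnes")
```

Note that the implied 10 kg of Li per vehicle is less than half the ~21.4 kg estimated above for a Tesla pack, which suggests the authors’ 260,000 tonne figure may, if anything, understate the demand.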
Well once again a post has got away from me. I was going to go on to discuss platinum but at this point that would simply be overkill. Rare earth metals and lithium form what we in chemistry call a rate-limiting step in the movement towards a 100% fossil fuel free future. Unless and until we can figure out some way to speed up or go around that rate-limiting step the grandest of plans is going to come crashing down to earth in the cold, hard light of day.

Tuesday, June 16, 2015

Deconstructing the 100% Fossil Fuel Free Wind, Water and Sunlight USA paper – Part I Why no nuclear power?

Last week there was quite a stir as a big report came out in Energy & Environmental Science titled: 100% clean and renewable wind, water, and sunlight (WWS) all-sector energy roadmaps for the 50 United States (100% WWS USA hereafter). The report was picked up by all the normal sources and given a lot of play in the press. It being 2015, the paper even has an interactive website at http://thesolutionsproject.org/. Now as regular readers of my blog know, I did an analysis of what it would take to get British Columbia to a 100% fossil fuel-free state and the results were not pretty. I calculated that it would require the energy produced from the equivalent of approximately 12 Site C Dams to get us there, which did not seem terribly promising. Needless to say, the idea that the United States could achieve 100% fossil fuel-free status was very appealing to me, but I was skeptical. Many of the tweets I read made it sound relatively simple, but one of the bloggers I have come to trust (David Roberts at Vox.com) suggested that it might not be quite that easy. He likened it to a World War II–scale mobilization, which sounded about right. That being said, I decided to dig a bit deeper into the numbers to see for myself.

The first thing I noticed about the paper was that the names of the first two (lead?) authors (Jacobson and Delucchi) were very familiar to me. For those unfamiliar with these two, Jacobson and Delucchi prepared a similarly-themed pair of papers titled “Providing all global energy with wind, water, and solar power”, Part I and Part II (called 100% WWS World Part I and 100% WWS World Part II hereafter).  I had always meant to dig more deeply into those papers and apparently I will be getting a chance to do that now because the 100% WWS USA paper relies heavily on those two papers for many of its assumptions and raw data.

One of the most interesting features of the 100% WWS USA paper is that it categorically shuts off the option of nuclear fission as part of the energy mix. The basis for this dismissal is an interesting one and almost entirely free of any legitimate concerns about nuclear energy itself. Sadly for the casual reader, the basis for this dismissal is pretty hard to track down. The 100% WWS USA report very briefly discusses why nuclear energy has been summarily dismissed and does so by referring the reader back to two documents. One is the 100% WWS World Part I paper (above) and the second is a paper prepared by a similar batch of authors led by Jacobson and Delucchi (Examining the feasibility of converting New York State’s all-purpose energy infrastructure to one using wind, water, and sunlight, hereafter 100% WWS NYS). The inclusion of the second reference is questionable as the 100% WWS NYS paper doesn’t actually provide any original analyses of nuclear power. The sole useful reference to nuclear power simply states “Jacobson and Delucchi (2011) explain why nuclear power and coal with carbon capture are also excluded.” Now you can probably guess what I am going to tell you. Yes, Jacobson and Delucchi (2011) is indeed “100% WWS World Part I”. So in the 100% WWS USA paper they cite two sources to explain why nuclear power is not appropriate for use in the United States. Both sources represent the authors’ own work and one is simply a circular reference driving back to the other. As an outsider it looks a lot like they are padding the impact factor of their earlier works while making the average reader believe that their claim is supported by multiple independent lines of research.
Going back to the source (100% WWS World Part I) we discover that the exclusion of nuclear from the mix is discussed, but its primary technical basis is derived from a single report prepared by, yes you guessed it, Mark Jacobson, titled “Review of solutions to global warming, air pollution, and energy security” (hereafter Jacobson 2009).

Jacobson 2009 is worthy of an entire blog series of its own because the best I can say is that it is an interesting paper to read. In the paper Jacobson creates a unique scale to define which technologies would make the cut in a future energy mix. I won’t go into detail about all the questionable assumptions that inform the table, but for the interested reader I suggest reading the paper and seeing for yourself. For instance, Jacobson indicates that wind energy (a truly dispersed energy source) would have the lowest physical footprint of all potential energy sources because he calculated the footprint of a wind power station to include only “the tower area touching the ground”. Under this approach the Buffalo Ridge Wind Farm in Minnesota, which covers 42,800 acres and has a direct physical project footprint of 77 acres, would occupy less "physical space" than a small city block. Even more amusing is the fact that he classifies wind as having the highest “normal operating reliability” while nuclear sits in the middle of the pack. Having written a lot about wind in the last year, I can state quite comfortably that the only thing reliable about wind power is that it is reliably absent for a large percentage of the generating year. It may be possible to smooth out reliability by putting enough plants in enough areas to allow for cross-connections, but even that has a limited capacity to deal with low-wind scenarios (see this ref for a breakdown for Europe). In the same section Jacobson downgrades nuclear energy’s reliability because nuclear plants can have “unscheduled outages during heat waves”. This ignores the reality that heat waves typically involve an absence of wind, so while the nuclear plant may have issues relating to over-demand, the wind turbines sit idle, completely unable to provide supply.
Jacobson goes on to point out the actual reliability statistics that indicate that nuclear is a very reliable energy source but discounts those statistics for his subsequent data aggregation.  
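To see how much work that "tower area touching the ground" metric does, consider a rough sketch. The turbine count (300) and tower base diameter (4 m) below are purely illustrative assumptions of mine, not figures from any of the papers, but the conclusion is insensitive to them:

```python
import math

# Illustration of the "tower area touching the ground" footprint metric.
# NOTE: the turbine count and tower base diameter are my own illustrative
# assumptions, chosen only to show the order of magnitude involved.
N_TURBINES = 300
TOWER_BASE_DIAMETER_M = 4.0
M2_PER_ACRE = 4046.86

# Total ground area of the tower bases alone
tower_area_m2 = N_TURBINES * math.pi * (TOWER_BASE_DIAMETER_M / 2) ** 2
tower_area_acres = tower_area_m2 / M2_PER_ACRE

print(f"Tower-only footprint: {tower_area_acres:.2f} acres")
# well under one acre -- for a project that actually spans 42,800 acres
```

Under this metric a project spanning tens of thousands of acres scores as less than a single acre, which is exactly why the resulting footprint rankings should be treated with caution.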

Jacobson compiles all his data into a single table (Table 4) where he rates/ranks the relative energy sources in order to demonstrate that the technologies he does not like are not viable for use. A brief look at Table 4 shows that nuclear fails the grade due to the risk of nuclear proliferation, some very interesting assumptions about future deaths attributable to nuclear proliferation (threat of nuclear war), thermal pollution from cooling tower return water (which he doesn’t really explain but assumes is a terrible thing) and the potential for disruptions to power supply by terrorists. Remember, this paper serves as the basis for the decision in all his subsequent papers (including 100% WWS USA) to dismiss nuclear energy as an alternative for future energy needs. So yes, you read that right: one of the primary drivers for discounting the use of nuclear energy in the United States in the 100% WWS USA paper is the risk of nuclear proliferation associated with the facilities. Apparently the USA is not a nuclear power and therefore we run the risk of giving the Americans the bomb if we allow those scary folks in Idaho to use nuclear power? The same goes for nuclear powers like the French, the English, the Russians, the Chinese and the Indians, not to mention the entirety of the NATO alliance and the dozens of countries that have safely used nuclear power for generations without building a bomb. Can you imagine a more ridiculous basis for deciding to omit such a critical energy source from the North American power grid? It is almost as if Jacobson and Delucchi have something against the use of nuclear power and are simply looking for an excuse to exclude it from the mix.

Admittedly, the 100% WWS World paper doesn’t rely entirely on Jacobson’s 2009 paper to dismiss nuclear energy. It also relies on papers by Benjamin Sovacool and Manfred Lenzen. Regular readers of my blog will remember Benjamin Sovacool. I wrote about him in a couple of posts: Wind Energy and Avian Mortality: Why Can't We get any Straight Numbers? and When peer-review is not enough - On estimates of avian deaths attributable to coal and nuclear facilities. He was the gent who derived an avian mortality rate for nuclear plants across the US by extrapolating the results from four sources in which the biggest influence was actually a nearby fossil fuel plant. In doing so he extrapolated an avian nuclear apocalypse essentially out of thin air. I do not have time to deconstruct the Sovacool 2008 paper, so I will leave that to the folks at RationalWiki (ref), who demonstrate that by triple-counting a report by Jan Willem Storm van Leeuwen (ref) and ignoring a number of other papers, Sovacool manages to turn nuclear energy into a bulk emitter of carbon to be shunned. Suffice it to say that the 100% WWS World Part I paper could have chosen any number of meta-analyses to establish the carbon emissions of nuclear energy, and the two they chose are arguably the most egregious outliers in the peer-reviewed literature.

I think I am done banging this drum. It is quite clear that in the 100% WWS USA paper the authors did not want to include nuclear power in the mix. Based on their previous output, that appears to have been a conscious decision on their part. Please let me be clear here: it is not an inherently bad decision. The authors of scientific papers often make specific decisions in order to do interesting research. The problem, in this case, is that instead of saying outright that they are excluding nuclear power to provide for an interesting research perspective, they do so in a manner that smears nuclear power. The same authors who were willing to distinguish, to the decimal point, the percentage of energy California would need to draw from tidal turbines were unwilling to distinguish between the proliferation risk posed by nuclear power plants in North Sudan and those in North Dakota? Going down the list, virtually all of the concerns from the Jacobson 2009 paper are irrelevant in a US context, and yet they form the basis for excluding nuclear power in the 100% WWS USA paper.

I see this blog post is getting a bit long. I had planned on addressing the distressing way the 100% WWS USA paper deals with rare earth metals in this post, as well, but I think that should be the topic for a future blog post instead.

Friday, June 5, 2015

More on that "Toxic Benzene Plume"

Today’s blog post is intended to provide some further commentary on the “toxic benzene plume” from my previous blog post: Questions about the City of Vancouver May 27th Trans-Mountain Expansion Proposal Summary of Evidence. As readers of my blog know, the Trans Mountain Pipeline Expansion Proposal (TMEP) Summary of Evidence (SoE) presented to the Vancouver City Council on 27 May 2015 (ref) included the results of a modelling exercise which suggested that: 
a major oil spill from Kinder Morgan's Trans Mountain pipeline expansion project would expose up to 1 million Metro Vancouver residents to unsafe levels of toxic vapours, and as many as 31,000 could suffer "irreversible or other serious health effects,"(ref).
Needless to say this conclusion garnered a lot of headlines. I saw stories everywhere from the Georgia Straight to the Globe and Mail. The problem is that, as I described in my last post, this conclusion fails the smell test. The modelling exercise incorrectly applied the toxicological characteristics of benzene to a “pseudo-component surrogate” made up of a mixture in which benzene was a very minor constituent. This resulted in a wildly overstated risk to the public which, I will admit, made for some pretty nifty headlines. This post is intended as a follow-up to my previous post to explain the “surrogate thing” as well as to relate some surprising additional information I have uncovered since my last post.
As discussed, the biggest question from my last post was “what is the deal with the surrogate”? Well the chemical definition of a surrogate is:

a pure compound different from, but similar enough to, the analyte that, when added at a known concentration to the sample prior to processing, provides a measure of the overall efficiency of the method (recovery). Surrogates have chemical characteristics that are similar to that of the analyte and must provide an analytical response that is distinct from that of the analyte (ref).
While the Levelton Consultants Ltd. report (the Levelton report served as the basis for that portion of the SoE) uses the term “surrogate” in a chemical context, it did not use the term under its chemical definition. Rather it used the non-technical definition of the term: “one that takes the place of another; a substitute” (ref). As I pointed out previously, modelling is hard, and to simplify the modelling Levelton took the theoretical oil in the spill and broke it into 15 “pseudo-components”, each of which was then assigned a surrogate for use in subsequent toxicological calculations. One “cut” of the dilbit was assigned the surrogate “benzene”. As I described previously, this badly skewed the risk results because benzene is by far the most toxic compound in the “cut” of dilbit for which it was used as a surrogate and appears in the dilbit at much lower concentrations than were used in the model. As an analogy, imagine you were tasked with compiling a survey of the animal population of Vancouver. To simplify the survey you didn’t ask your surveyors to identify the dogs by breed, instead asking them to group the dogs by size. For a subsequent risk analysis you then assigned the pit bull as a “surrogate” to describe the behaviour of all dogs smaller than 2 feet tall identified in your survey. Would you feel comfortable with the outcome of that risk analysis knowing that it treated every Chihuahua it counted as if it were a pit bull? If someone subsequently warned you to stay off the street for fear of being attacked by "surrogate pit bulls", based on this analysis, would you stay off the street? Well, that is what they did in this report with benzene.
As a follow-up to my last post I also did a bit of digging into the documents referred to in the Levelton report. Specifically, I located the Intrinsik and Tetra Tech EBA reports (caution: both are large files that take a while to download) used to rationalize the use and choice of surrogates in the modelling exercise. The Tetra Tech EBA report does indeed use “pseudo-components” as surrogates; however, in their analysis “benzene” is used as a “surrogate” only for the benzene component of an oil spill (confusingly, it is thus used as a surrogate for itself alone). As such, instead of representing around 1% - 2% of the total spill mass (my best guess at the number used by Levelton), benzene was determined to represent 0.088% of the spill mass (a fraction based on the Tetra Tech EBA analysis of the future pipeline composition). In the Intrinsik report, “benzene” is likewise restricted to the actual benzene component of the dilbit, and for the toxicological calculations only benzene is compared against the acute inhalation exposure limits for benzene. So when the Levelton report claims to follow an approach that “is consistent with the approach taken with the Human Health Risk Assessment (HHRA)”, that consistency does not extend to how they approached the critical component described in the SoE and the one that garnered all the headlines: benzene.
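The gap between the two treatments is easy to put in numbers. A quick sketch, pitting my 1% - 2% guess at the fraction Levelton treated as benzene against the 0.088% Tetra Tech EBA figure:

```python
# How badly does treating the whole "cut" as benzene overstate the
# benzene mass? The 1%-2% range is my guess at Levelton's number (as
# stated in the post); 0.088% is the Tetra Tech EBA benzene fraction.
levelton_low, levelton_high = 0.01, 0.02   # guessed mass fraction treated as benzene
tetratech = 0.00088                        # actual benzene mass fraction

factor_low = levelton_low / tetratech
factor_high = levelton_high / tetratech
print(f"Benzene mass overstated by roughly {factor_low:.0f}x to {factor_high:.0f}x")
```

An order-of-magnitude-plus inflation of the source term goes a long way toward explaining an order-of-magnitude-plus inflation of the modelled exposure concentrations.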
The biggest surprise in my follow-up reading was to discover that this was not the first modelling exercise to examine benzene vapour concentrations from a theoretical oil spill in Vancouver Harbour. As I described in my previous post, what made my chemist’s antennae go haywire was the model output which said that in the case of a spill thousands of City of Vancouver residents would be exposed to benzene concentrations over 800 parts per million (ppm). As I pointed out, previous in situ studies (actual studies in the field) of spills of comparable API gravity crudes had measured benzene concentrations ranging from 7 ppm down to less than the detection limit (ref). A study of a lighter crude (with more volatile components than dilbit) (ref), sampled from a mere 2.5 cm above the oil surface, measured instantaneous benzene concentrations ranging from 80.4 ppm down to 3.5 ppm. Finally, most everyone in the modelling community must have heard of the modelling study of the Exxon Valdez spill (ref). It, too, got a result very similar to the in situ experiments. Thus, when I read the Levelton report I was more than a bit surprised by the output of their model. Well, imagine my surprise to discover that the Tetra Tech EBA report, used as a reference by the Levelton authors, actually included a modelling exercise almost identical to the one carried out by Levelton. The difference was that the Tetra Tech EBA modelling used benzene in proportion to its concentration in dilbit. Unsurprisingly, the resulting outputs were entirely consistent with the academic literature: the maximum 1-hour average ground-level concentration for benzene was less than 100 ppm over the small portion of Vancouver affected by the spill. Certainly not headline-worthy, I will admit, but entirely consistent with the rest of the science out there. Nowhere in the Levelton report, which otherwise references the Tetra Tech EBA report, do they contrast their results with those generated by Tetra Tech EBA.
It is almost as if they didn’t want anyone to know that the previous modelling exercise had been carried out and had generated such non-threatening (boring? non-headline worthy?) results.
In the academic community there is a simple rule: if a new study runs contrary to a body of research, then it is incumbent on the authors of the study to explain the discrepancy. Sometimes the new study is a paradigm changer, but most of the time it represents an outlier of dubious use in decision-making. Unfortunately, the Levelton report does not explain why its results differ so dramatically from the scientific consensus. More troublingly, it does not even acknowledge the existence of the body of research out there, including an almost identical modelling study, that came to such startlingly different conclusions. I’m sure the Vancouver City Council, and the local media, would be as interested as I am in finding out why such an outlier result was trumpeted on May 27th.