Panicking about climate change? See the rest of the story.

by Fabius Maximus 

Summary: Contrast alarmists’ “news” about the climate with the estimates produced by real scientists. This is the first step to breaking the public policy gridlock that prevents us from preparing for the inevitable repeat of past weather, let alone climate change.

Heating the world

 

What do liberals read about climate change in the daily links at Naked Capitalism? (It is a useful resource, which I read every day.)

Oct 10 – Michael Mann: We Are Even Closer To Climate Disaster Than IPCC Predicts. Interview at Real News.

Oct 13 – Climate Change Sea Level Rise Will Be Worse Than Imagined at Rolling Stone.

Oct 14 – Why Catastrophic Climate Change is Probably Inevitable Now – “How Capitalism Torched the Planet by Imploding Into Fascism.” By umair haque at Eudaimonia and Co. “Well worth a read.”

Oct 15 – UN Says Climate Genocide Is Coming. It’s Actually Worse Than That. At New York Magazine.

Oct 16 – Dangerous Rapidly Intensifying Landfalling Hurricanes Like Michael and Harvey May Grow More Common at Weather Underground.

Oct 17 – Climate Change Might Double the Cost of a Beer at Wired.

Oct 17 – Mammals cannot evolve fast enough to escape current extinction crisis at ScienceDaily. ” 🙁 “

Oct 18 – Retired admiral says climate-change research may mean ‘our survival’ at Stars and Stripes.

Oct 18 – We’re just beginning to understand the toll climate change takes on mental health at Grist.

Most of these claims are debatable. Some are by over-the-top alarmists. None of these articles provides a balanced view of the current state of climate science. A steady diet of this propaganda produces a form of learned ignorance: a conviction that the world is doomed unless extreme measures are taken, and that people who disagree are just stupid. {Just as conservative websites give equally one-sided and extreme views about debt, etc. The pattern is the same across America's political tribes.}

For the rest of the story, turn to climate scientist Judith Curry. Her assessment below contains climate science news that would shock the readers of NC and other liberal websites.

Climate uncertainty monster: What’s the worst case?

By Judith Curry at Climate Etc, 11 October 2018

Summary: On possibilities, known neglecteds, and the vicious positive feedback loop between scientific assessment and policy making that has created a climate Frankenstein.

I have prepared a new talk that I presented yesterday at Rand Corp. …My complete presentation can be downloaded [Rand uncertainty]. This post focuses on the new material.

Scientists are saying the 1.5 degree climate report pulled punches, downplaying real risks facing humanity in next few decades, including feedback loops that could cause ‘chaos’ beyond human control.

To my mind, if the scientists really wanted to communicate the risk from future climate change, they should at least articulate the worst possible case (heck, was anyone scared by that 4″ of extra sea level rise?). Emphasis on POSSIBLE. The possible worst case puts upper bounds on what could happen, based upon our current background knowledge. The exercise of trying to articulate the worst case illuminates many things about our understanding (or lack thereof) and the uncertainties. A side effect of such an exercise would be to lop off the ‘fat tails’ that economists/statisticians are so fond of manufacturing. And finally, the worst case does have a role in policy making (but not as the expected case).

My recent paper “Climate uncertainty and risk” assessed the epistemic status of climate models, and described their role in generating possible future scenarios. I introduced the “possibilistic” approach to scenario generation, including the value of scientific speculation on policy-relevant aspects of plausible, high-impact scenarios, even though we can neither model them realistically nor provide a precise estimate of their probability.

How to evaluate whether a scenario is possible or impossible?

A series of papers by Gregor Betz {see his bio and publications} provides some insights. Below is my take on how to approach this for future climate scenarios, based upon my reading of Betz and other philosophers working on this problem. Click slides to enlarge.

Possibility Verification by Gregor Betz

I categorize climate models here as (un)verified possibilities; there is a debate in the philosophy of science literature on this topic. The argument is that some climate models may be regarded as producing verified possibilities for some variables (e.g. temperature).

How Much Confidence in Climate Models

Maybe I will accept that a few models produce useful temperature forecasts, provided that they also produce accurate ocean oscillations when initialized. But that is about as far as I would go towards claiming that climate model simulations are ‘verified’.

An interesting aside regarding the ‘tribes’ in the climate debate, in context of possibility verification:

  • Lukewarmers: focus on the verified possibilities
  • Consensus/IPCC types: focus on the unverified possibilities generated by climate models.
  • Alarmists: focus on impossible scenarios and/or borderline impossible as ‘expected’ scenarios, or worthy of justifying precautionary avoidance of emitting CO2.

This diagram provides a visual that distinguishes the various classes of possibilities, including the impossible and irrelevant. While verified possibilities have higher epistemic status than the unverified possibilities, all of these possibilities are potentially important for decision makers.

Classifying Possibilities

The orange triangle illustrates a specific vulnerability assessment, whereby only a fraction of the scenarios are relevant to the decision at hand, and the most relevant ones are unverified possibilities and even the impossible ones. Clarifying what is impossible versus what is not is important to decision makers, and the classification provides important information about uncertainty.

Let’s apply these ideas to interpreting the various estimates of equilibrium climate sensitivity {ECS}. The AR5 likely range is 1.5 to 4.5 C, which has not budged since the 1979 Charney report. The most significant statement in the AR5 is tucked into a footnote in the SPM: “No best estimate for equilibrium climate sensitivity can now be given because of lack of agreement on values across assessed lines of evidence and studies.”

The big disagreement is between the CMIP5 model range (values between 2.1 and 4.7 C) and the historical observations using an energy balance model. While Lewis and Curry (2015) was not included in the AR5, it provides the most objective comparison of this approach with the CMIP5 models since it used the same forcing and time period: “The implications for climate sensitivity of AR5 forcing and heat uptake estimates” in Climate Dynamics, August 2015 – gated, open copy here.

The Lewis/Curry estimates are arguably corroborated possibilities, since they are based directly on historical observational data, linked together by a simple energy balance model. It has been argued that LC underestimate values on the high end, and neglect the very slow feedbacks. True, but the same holds for the CMIP5 models, so this remains a valid comparison.
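For readers who want to see the mechanics, here is a minimal sketch of the energy-balance approach used by observationally based studies such as Lewis and Curry: ECS is estimated as the forcing from doubled CO2 times the observed warming, divided by the change in forcing minus the change in ocean heat uptake. The input numbers below are illustrative placeholders, not the values from the paper.

```python
# Minimal sketch of an energy-balance estimate of ECS, the approach used by
# observationally based studies such as Lewis & Curry. The inputs below are
# illustrative placeholders, NOT the published values -- substitute the
# paper's forcing, warming, and heat-uptake estimates to reproduce its result.

F_2X = 3.7        # W/m^2, canonical forcing from a doubling of CO2
delta_T = 0.75    # K, warming between base and final periods (illustrative)
delta_F = 2.0     # W/m^2, change in total radiative forcing (illustrative)
delta_Q = 0.4     # W/m^2, change in planetary heat uptake (illustrative)

# Energy balance: ECS = F_2x * dT / (dF - dQ)
ecs = F_2X * delta_T / (delta_F - delta_Q)
print(f"Energy-balance ECS estimate: {ecs:.2f} C")  # ~1.7 C with these inputs
```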

Equilibrium Climate Sensitivity - IPCC

Where to set the borderline impossible range? The IPCC AR5 put a 90% limit at 6 C. None of the ECS values cited in the AR5 extend much beyond 6 C, although in the AR4 many long tails were cited, apparently extending beyond 10 C. Hence in my diagram I put a range of 6-10 C as borderline impossible based on information from the AR4/AR5.

JC’s perspective.

We have an anchor on the lower bound — the no-feedback climate sensitivity, which is nominally ~1 C (sorry, skydragons). The latest Lewis/Curry values are reported here over the very likely range (5-95%). I regard this as our current best estimate of observationally based ECS values, and regard these as corroborated possibilities.

Equilibrium Climate Sensitivity - JC

I accept the possibility that Lewis/Curry is too low on the upper range, and agree that it could be as high as 3.5 C. And I’ll even bow to peer/consensus pressure and put the upper limit of the very likely range at 4.5 C. I think values of 6-10 C are impossible, and I would personally define the borderline impossible region as 4.5 – 6 C. Yes, we can disagree on this one, and I would like to see lots more consideration of this upper bound issue. But the defenders of the high ECS values are more focused on trying to convince us that ECS can’t be below 2 C.

But can we shake hands and agree that values above 10C are impossible?
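As a concrete bookkeeping aid, here is a minimal sketch that encodes the bounds just stated into a simple classifier. The cutoffs come from the preceding paragraphs; the function itself and its category labels are purely illustrative, not part of Curry's presentation.

```python
def classify_ecs(ecs_c: float) -> str:
    """Classify an ECS value (deg C) using the bounds stated in the post:
    ~1 C no-feedback anchor, observationally based range up to ~3.5 C,
    very likely upper limit 4.5 C, 4.5-6 C borderline impossible, >6 C impossible."""
    if ecs_c < 1.0:
        return "below the no-feedback lower anchor"
    if ecs_c <= 3.5:
        return "corroborated possibility (observationally based range)"
    if ecs_c <= 4.5:
        return "unverified possibility (within the very likely range)"
    if ecs_c <= 6.0:
        return "borderline impossible"
    return "impossible, based on current background knowledge"

for value in (1.6, 3.0, 5.0, 8.0):
    print(f"{value} C -> {classify_ecs(value)}")
```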

Now consider the perspective of economists on equilibrium climate sensitivity. The IPCC AR5 WGIII report based all of its calculations on the assumption that ECS = 3 C, based on the IPCC AR4 WGI Report. Seems like the AR5 WGI folks forgot to give WGIII the memo that there was no longer a preferred ECS value.


Subsequent to the AR5 Report, economists became more sophisticated and began using the ensemble of CMIP5 simulations. One problem is that the CMIP5 models don’t cover the bottom 30% of the IPCC AR5 likely range for ECS.

Equilibrium Climate Sensitivity - Economists

The situation didn’t get really bad until economists started creating PDFs {probability density functions} of ECS. Based on the AR4 assessment, the US Interagency Working Group on the Social Cost of Carbon fitted a distribution that had 5% of the values greater than 7.16 C. Weitzman (2008) fitted a distribution with 0.05% of values above 11 C and 0.01% above 20 C. While these probabilities seem small, they happen to dominate the calculation of the social cost of carbon (low probability, high impact events). [see Worst case scenario versus fat tail]. These large values of ECS (nominally beyond 6 C and certainly beyond 10 C) are arguably impossible based upon our background knowledge.

For equilibrium climate sensitivity, we have no basis for developing a PDF – no mean, and a weakly defended upper bound. Statistically manufactured ‘fat tails’, with arguably impossible values of climate sensitivity, are driving the social cost of carbon. Instead, effort should be focused on identifying the possible or plausible worst case that can’t be falsified based on our background knowledge. [see also Climate sensitivity: lopping off the fat tail]
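To see why the statistical manufacture of fat tails matters so much, here is an illustrative sketch. It fits a lognormal distribution (a generic stand-in, not the distribution the Interagency Working Group actually fitted) so that the median is 3 C and 5% of values exceed 7.16 C, then shows how the tail above 6 C dominates the expectation of a toy convex damage function.

```python
# Illustrative only: fit a fat-tailed distribution to an assumed best estimate
# of ECS and show how the tail drives a convex damage calculation. The
# lognormal is a generic stand-in (not the distribution the Interagency
# Working Group actually fitted) and the cubic damage function is a toy.
import numpy as np
from scipy import stats, integrate

median_ecs, p95_ecs = 3.0, 7.16                      # calibration targets from the post
sigma = np.log(p95_ecs / median_ecs) / stats.norm.ppf(0.95)
ecs_dist = stats.lognorm(s=sigma, scale=median_ecs)  # lognormal with median 3 C

print(f"P(ECS > 6 C)  = {ecs_dist.sf(6.0):.3f}")     # roughly 10%
print(f"P(ECS > 10 C) = {ecs_dist.sf(10.0):.4f}")    # roughly 1%

damage = lambda ecs: ecs**3                          # toy convex damage function
total, _ = integrate.quad(lambda x: damage(x) * ecs_dist.pdf(x), 0.0, 50.0)
tail, _  = integrate.quad(lambda x: damage(x) * ecs_dist.pdf(x), 6.0, 50.0)
print(f"Share of expected damage from ECS > 6 C: {tail / total:.0%}")  # well over half
```

With these (illustrative) choices, the roughly 10% of probability mass above 6 C contributes well over half of the expected damage, which is the sense in which low-probability tail values dominate the social cost of carbon.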

About the rising seas.

The issue of sea level rise provides a good illustration of how to assess the various scenarios and the challenges of identifying the possible worst case scenario. This slide summarizes expert assessments from the IPCC AR4 (2007), IPCC AR5 (2013), the US Climate Science Special Report (CSSR 2017), and the NOAA Sea Level Rise Scenarios Report (2017). Also included is a range of worst case estimates (with and without an acceleration of sea level rise).

Global Sea Level Rise 2100 - experts

With all these expert assessments, the issue becomes ‘which experts?’ We have the international and national assessments, with a limited number of experts for each that were selected by whatever mechanism. Then we have expert testimony from individual witnesses that were selected by politicians or lawyers having an agenda.

In this context, the expert elicitation reported by Horton et al. (2014) is significant, which considered expert judgement from 90 scientists publishing on the topic of sea level rise: “Expert assessment of sea-level rise by AD 2100 and AD 2300” in Quaternary Science Reviews, 15 January 2014 (gated, open copy here).

Also, a warming of 4.5 C is arguably the worst case for 21st century temperature increase (actually I suspect this is an impossible amount of warming for the 21st century, but let’s keep it for the sake of argument here). So should we regard Horton’s ‘likely’ SLR of 0.7 to 1.2 m for 4.5 C warming as the ‘likely’ worst case scenario? The Horton paper gives 0.5 to 1.5 m as the very likely range (5 to 95%). These values are much lower than the range 1.6 to 3 m (and don’t even overlap).

There is obviously some fuzziness and different ways of thinking about the worst case scenario for SLR by 2100. Different perspectives are good, but 0.7 to 3 m is a heck of a range for the borderline worst case.

JC’s perspective on sea level rise circa 2100.

The corroborated possibilities, from rates of sea level rise in the historical record, are 0.3 m and less.
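A back-of-envelope conversion shows why the historical record anchors the low end: each 2100 total implies an average rate of rise over the rest of the century, which can be compared with the roughly 3 mm per year observed in the recent satellite record. That observed rate is an assumption added here for illustration, not a figure from Curry's slides.

```python
# Back-of-envelope check: what average rate of rise does each 2100 total imply,
# compared with the ~3 mm/yr observed in the recent satellite record? (The
# 3 mm/yr figure is an assumption added here, not a number from the slides.)
YEARS_TO_2100 = 2100 - 2018
OBSERVED_RATE_MM_PER_YR = 3.0

for total_m in (0.3, 1.0, 1.6, 3.0):
    implied_rate = total_m * 1000.0 / YEARS_TO_2100       # mm per year, on average
    multiple = implied_rate / OBSERVED_RATE_MM_PER_YR
    print(f"{total_m:.1f} m by 2100 -> {implied_rate:4.1f} mm/yr "
          f"(~{multiple:.1f}x the recent observed rate)")
```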

The values from the IPCC AR4, which were widely criticized for NOT including glacier dynamics, are actually verified possibilities (contingent on a specified temperature change) – focused on what we know, based on straightforward theoretical considerations (e.g. thermal expansion) and processes for which we have defensible empirical relations.

Once you start including ice dynamics and the potential collapse of ice sheets, we are in the land of unverified possibilities.

Global Sea Level Rise 2100 - JC

I regard anything beyond 3 m as impossible, with the territory between 1.6 m and 3.0 m as the disputed borderline impossible region. I would like to see another expert elicitation study along the lines of Horton that focused on the worst case scenario. I would also like to see more analysis of the different types of reasoning that are used in creation of a worst case scenario.

The worst case scenario for sea level rise is having very tangible applications NOW in adaptation planning, siting of power plants, and in lawsuits. This is a hot and timely topic, not to mention important. A key topic in the discussion at Rand was how decision makers perceive and use ‘worst case’ scenario information. One challenge is to avoid having the worst case become anchored as the ‘expected’ case.

Framing the issue of 21st century climate change and sea level rise.

I don’t think Donald Rumsfeld, in his famous unknown taxonomy, included the category of ‘unknown knowns’. Unknown knowns, sometimes referred to as ‘known neglecteds,’ refer to known processes or effects that are neglected for some reason.

Climate science has made a massive framing error, in terms of framing future climate change as being solely driven by CO2 emissions. The known neglecteds listed below are colored blue for an expected cooling effect over the 21st century, and red for an expected warming effect.

Framing Error

Much effort has been expended in imagining future black swan events associated with human caused climate change. At this point, human caused climate change and its dire possible impacts are so ubiquitous in the literature and public discussion that I now regard human-caused climate change as a ‘white swan.’ The white swan is frankly a bit of a ‘rubber ducky’, but so many alarming scenarios have been tossed out there that it is hard to imagine a climate surprise caused by CO2 emissions that has not already been imagined.

The black swans related to climate change are associated with natural climate variability. There is much room for the unexpected to occur, especially for the ‘CO2 as climate control knob’ crowd.

Black versus White Swans

Existing climate models do not allow exploration of all possibilities that are compatible with our knowledge of the basic way the climate system actually behaves. Some of these unexplored possibilities may turn out to be real ones.

Scientific speculation on plausible, high-impact scenarios is needed, particularly including the known neglecteds.

Alternative Scenario Generation Methods

Is all this categorization of uncertainty merely academic, the equivalent of angels dancing on the head of a pin? The level of uncertainty and the relevant physical processes (controllable or uncontrollable) are key elements in selecting the appropriate decision-analytic framework.

Controllability of the climate (the CO2 control knob) is something that has been implicitly assumed in all this. Perhaps on millennial time scales climate is controlled by CO2 (but on those time scales CO2 is a feedback as well as a forcing). On the time scale of the 21st century, anything feasible that we do to reduce CO2 emissions is unlikely to have much of an impact on the climate, even if you believe the climate model simulations (see Lomborg).

Optimal control and cost/benefit analysis, which are used in evaluating the social cost of carbon, assume statistical uncertainty and that the climate is controllable — two seriously unsupported assumptions.

Scenario planning, adaptive management and robustness/resilience/antifragility strategies are much better suited to conditions of scenario/deep uncertainty and a climate that is uncontrollable.
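A toy example of the distinction: expected-cost analysis needs a probability distribution over scenarios, while a minimax-regret rule (one simple robust-decision approach) needs only the scenario set. The scenarios, policies, and cost numbers below are invented purely for illustration.

```python
# Toy contrast between two decision rules under deep uncertainty. The
# scenarios, policies, and cost numbers are invented purely for illustration.
scenarios = ["low ECS", "mid ECS", "high ECS"]
policies = {
    # total cost (mitigation plus damages) under each scenario, arbitrary units
    "minimal action":    [1, 3, 20],
    "moderate hedging":  [4, 5, 8],
    "aggressive action": [8, 8, 9],
}

# Cost/benefit needs a probability for each scenario -- exactly what deep
# uncertainty denies us. Assume a PDF anyway, for the sake of contrast.
assumed_pdf = [0.6, 0.3, 0.1]
expected_cost = {name: sum(p * c for p, c in zip(assumed_pdf, costs))
                 for name, costs in policies.items()}

# Minimax regret needs only the scenario set, no probabilities at all.
best_in_scenario = [min(costs[i] for costs in policies.values())
                    for i in range(len(scenarios))]
max_regret = {name: max(c - b for c, b in zip(costs, best_in_scenario))
              for name, costs in policies.items()}

print("Lowest expected cost (assumed PDF):", min(expected_cost, key=expected_cost.get))
print("Lowest maximum regret (no PDF):    ", min(max_regret, key=max_regret.get))
# The two rules can pick different policies; which rule is appropriate depends
# on whether a defensible PDF over scenarios exists at all.
```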

Decision-analytic frameworks

How did we land in this situation of such a serious science-policy mismatch? Well, in the early days (late 1980s – early 1990s) international policy makers put the policy cart before the scientific horse, with a focus on CO2 and dangerous climate change. This focus led climate scientists to make a serious framing error, by focusing only on CO2-driven climate change. In a drive to remain relevant to the policy process, the scientists focused on building consensus and reducing uncertainties. They also began providing probabilities – even though these were unjustified by the scientific knowledge base, there was a perception that policy makers wanted this. And this led to fat tails and cost/benefit analyses that are all but meaningless (no matter who they give Nobel prizes to).

Oversimplification - Alarmism

The end result is oversimplification of both the science and policies, with positive feedback between the two that has created a climate alarm monster.

This Frankenstein has been created from framing errors, characterization of deep uncertainty with probabilities, and the statistical manufacture of fat tails.

Creation of a climate Frankenstein

“Monster creation” triggered a memory of a post I wrote in 2010: Heresy and the Creation of Monsters. Yikes I was feisty back then (getting mellow in my old age).

———————————————————–

The bottom line: my conclusions from Dr. Curry’s analysis

“We don’t even plan for the past.”
— Steven Mosher (of Berkeley Earth), a comment posted at Climate Etc.

The public policy gridlock about climate change has left us unprepared for the inevitable repeat of past weather. This year the people of Puerto Rico paid for our folly. Next time the cost in blood and money might be even larger. Hysteria produced by three decades of propaganda – one-sided flows of extremely biased information about climate change – has helped produce this gridlock.

Judith Curry

About Judith Curry

Judith Curry recently retired as a Professor of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. She is now President and co-owner of Climate Forecast Applications Network (CFAN). Prior to joining the faculty at Georgia Tech, she served on the faculties of the University of Colorado, Penn State University and Purdue University.

She has served on the NASA Advisory Council Earth Science Subcommittee, the DOE Biological and Environmental Science Advisory Committee, the National Academies Climate Research Committee, and Space Studies Board, and the NOAA Climate Working Group.

She is a Fellow of the American Meteorological Society, the American Association for the Advancement of Science, and the American Geophysical Union. Her views on climate change are best summarized by her Congressional testimony: Policy Relevant Climate Issues in Context, April 2013.

Follow Dr. Curry on Twitter at @curryja. Learn about her firm, CFAN, at their website.
