The biggest news about climate change (not from the IPCC)

by Fabius Maximus 

Summary: This month might mark an inflection point in the three decades of struggle over the best policy for dealing with climate change. Not the new IPCC special report, which contains little that is new, but a new paper questioning the global temperature history records, the foundation of the debate.

“But facts are chiels that winna ding, and downa be disputed.”
— “A Dream” by Robert Burns (1786).

An Audit of the Creation and Content of the HadCRUT4 Temperature Dataset

By John D. McLean, October 2018.

Preface

“This report is based on a thesis for my PhD, which was awarded in December 2017 by James Cook University, Townsville, Australia. …The thesis was examined by experts external to the university, revised in accordance with their comments and then accepted by the university. This process was at least equivalent to ‘peer review’ as conducted by scientific journals.”

His thesis is “An audit of uncertainties in the HadCRUT4 temperature anomaly dataset plus the investigation of three other contemporary climate issues”, submitted for a PhD in physics at James Cook University (2017). The HadCRUT dataset is produced by a collaboration between the Met Office’s Hadley Centre and the Climatic Research Unit at the University of East Anglia.

Here are two excerpts from Dr. McLean’s report.

From the Introduction.

… The key temperature data used by the Intergovernmental Panel on Climate Change (IPCC) is the HadCRUT dataset, now in its fourth version and known as HadCRUT4. When I was an Expert Reviewer of the IPCC’s 2013 Climate Assessment report I raised questions as to whether the HadCRUT4 dataset and the associated HadSST3 dataset had been audited. The response both times was that it hadn’t.

Further indication that no-one has independently audited the HadCRUT4 dataset came early in my analysis, when I found that certain associated files published simultaneously with the main dataset contained obvious errors. Given the nature of the errors and the years in which some of the errors occurred, it seemed that they probably existed for at least five years. (At the time I notified the relevant people and the files have since been corrected.) It seems very strange that man-made warming has been a major international issue for more than 30 years and yet the fundamental data has never been closely examined. …

{Figure: HadCRUT4 average temperature anomaly.}

Almost all of the published papers about the HadCRUT4 dataset and its two associated datasets were written by people involved in the construction and maintenance of them, which hardly makes for unbiased analysis. …

Some issues in this study focus on individual situations, such as a single observation station, that would have negligible impact on global average values. Similar issues could exist elsewhere in the data and processing, perhaps less obviously, and the fact that issues can be identified at all suggests a variety of problems including lack of attention to detail and possible problems with fundamental procedures or processing. Above all, they show that considerable uncertainty exists about the accuracy of the HadCRUT4 dataset.

The PhD candidature on which this work is based was funded on the normal “per candidate” basis by the Australian government and had no additional funding.  The creation of this report itself had no funding whatsoever.

From the Executive Summary.

…As far as can be ascertained, this is the first audit of the HadCRUT4 dataset, the main temperature dataset used in climate assessment reports from the Intergovernmental Panel on Climate Change (IPCC). Governments and the United Nations Framework Convention on Climate Change (UNFCCC) rely heavily on the IPCC reports so ultimately the temperature data needs to be accurate and reliable.

This audit shows that it is neither of those things. More than 70 issues are identified, covering the entire process from the measurement of temperatures to the dataset’s creation, to data derived from it (such as averages) and to its eventual publication. The findings (shown in consolidated form in Appendix 6) even include simple issues of obviously erroneous data, glossed-over sparsity of data, significant but questionable assumptions and temperature data that has been incorrectly adjusted in a way that exaggerates warming.

It finds, for example, an observation station reporting average monthly temperatures above 80°C, two instances of a station in the Caribbean reporting December average temperatures of 0°C and a Romanian station reporting a September average temperature of -45°C when the typical average in that month is 10°C. On top of that, some ships that measured sea temperatures reported their locations as more than 80 km inland.
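{A quality check of the kind implied here is easy to sketch. The Python below is my illustration, not HadCRUT4’s actual quality-control code; the thresholds, record format, and climatology rule are assumptions chosen only to show how a basic range and departure-from-climatology check would flag the values cited above.}

```python
# A sketch of the kind of basic sanity check that would flag the values
# cited above. Thresholds, record format, and the climatology rule are
# illustrative assumptions, not HadCRUT4's actual quality-control code.

def check_monthly_record(station, month, avg_temp_c, climatology_c=None):
    """Return a list of problems found in one monthly average temperature."""
    problems = []
    # No monthly *average* near-surface temperature approaches 80 C; the
    # hottest monthly means on Earth are in the high 30s C.
    if not (-75.0 <= avg_temp_c <= 45.0):
        problems.append(f"{station} {month}: {avg_temp_c} C is physically implausible")
    # Flag values far from the station's long-term mean for that month,
    # e.g. a Caribbean December of 0 C or a Romanian September of -45 C.
    if climatology_c is not None and abs(avg_temp_c - climatology_c) > 15.0:
        problems.append(f"{station} {month}: {avg_temp_c} C departs "
                        f"{avg_temp_c - climatology_c:+.1f} C from climatology")
    return problems

# The Romanian example from the audit: -45 C against a typical 10 C.
print(check_monthly_record("Romanian station", "Sep", -45.0, climatology_c=10.0))
```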

It appears that the suppliers of the land and sea temperature data failed to check for basic errors and the people who create the HadCRUT dataset didn’t find them and raise questions either.

The processing that creates the dataset does remove some errors, but it uses a threshold derived from two values calculated from part of the data, and errors were not removed from that part before those two values were calculated.
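{The statistical flaw described here is worth spelling out: if the two values that set a rejection threshold, typically a mean and a standard deviation, are computed from data still containing gross errors, those errors inflate the threshold and can escape rejection. A minimal sketch in Python, assuming a generic mean ± 3σ rule rather than HadCRUT4’s exact procedure:}

```python
import statistics

# Sketch of why computing a rejection threshold from uncleaned data fails.
# The mean +/- 3*sigma rule here is an illustrative assumption, not the
# exact HadCRUT4 procedure.
values = [9.8, 10.1, 10.4, 9.9, 10.2, 85.0]  # one gross error (85.0)

def outliers(data, reference, k=3.0):
    """Values of `data` more than k standard deviations from the mean,
    where mean and standard deviation come from `reference`."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return [v for v in data if abs(v - mu) > k * sigma]

# Threshold derived from the contaminated data: the error inflates both
# the mean and the standard deviation, so nothing is rejected.
print(outliers(values, reference=values))  # -> []

# Threshold derived from data crudely pre-screened first: the error is caught.
clean = [v for v in values if v < 45.0]
print(outliers(values, reference=clean))   # -> [85.0]
```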

Data sparsity is a real problem. The dataset starts in 1850 but for just over two years at the start of the record the only land-based data for the entire Southern Hemisphere came from a single observation station in Indonesia. At the end of five years just three stations reported data in that hemisphere. Global averages are calculated from the averages for each of the two hemispheres, so these few stations have a large influence on what’s supposedly “global”.
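{A toy calculation shows how this averaging scheme gives a lone station outsized weight. The anomaly values below are invented, and the real dataset averages gridded anomalies rather than raw station values, but the arithmetic of hemispheric averaging is as described:}

```python
# Toy illustration of the hemispheric averaging described above: the
# global mean is the mean of the two hemispheric means, so a lone
# Southern Hemisphere station carries as much weight as every Northern
# Hemisphere station combined. (Anomaly values are made up; the real
# dataset averages gridded anomalies, not raw station values.)
north = [0.1, 0.2, 0.0, 0.1, 0.2, 0.1, 0.0, 0.1]  # many NH stations
south = [1.5]                                      # one SH station

nh_mean = sum(north) / len(north)   # 0.1
sh_mean = sum(south) / len(south)   # 1.5
global_mean = (nh_mean + sh_mean) / 2
print(global_mean)  # 0.8 -- the single SH station dominates
```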


Related to the amount of data is the percentage of the world (or hemisphere) that the data covers. According to the method of calculating coverage for the dataset, 50% global coverage wasn’t reached until 1906 and 50% of the Southern Hemisphere wasn’t reached until about 1950.

In May 1861 global coverage was a mere 12% – that’s less than one-eighth. In much of the 1860s and 1870s most of the supposedly global coverage was from Europe and its trade sea routes and ports, covering only about 13% of the Earth’s surface. To calculate averages from this data and refer to them as “global averages” is stretching credulity.
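{Coverage figures like these are normally computed as the area-weighted fraction of grid cells containing data, with each cell weighted by the cosine of its latitude. The sketch below shows that generic method; the 5°×5° resolution matches HadCRUT4’s grid, but the code is my illustration, not the dataset’s published procedure:}

```python
import math

# Sketch of an area-weighted coverage calculation on a latitude/longitude
# grid. The 5x5 degree resolution matches HadCRUT4's grid, but the
# cos(latitude) weighting here is the generic rule, not its published code.

def coverage(occupied_cells, lat_step=5.0, lon_step=5.0):
    """Fraction of the Earth's surface covered, weighting each cell by
    cos(latitude) of its centre to account for converging meridians."""
    total = 0.0
    covered = 0.0
    lat = -90.0 + lat_step / 2
    while lat < 90.0:
        w = math.cos(math.radians(lat))
        lon = -180.0 + lon_step / 2
        while lon < 180.0:
            total += w
            if (round(lat, 1), round(lon, 1)) in occupied_cells:
                covered += w
            lon += lon_step
        lat += lat_step
    return covered / total

# Example: data confined to a band of European cells covers very little globe.
europe = {(52.5, lon + 2.5) for lon in range(-10, 30, 5)}
print(f"{coverage(europe):.1%}")  # about 0.3% of the Earth's surface
```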

Another important finding of this audit is that many temperatures have been incorrectly adjusted. {Technical explanation follows.} …

The overall conclusion (see chapter 10) is that the data is not fit for global studies. Data prior to 1950 suffers from poor coverage and very likely multiple incorrect adjustments of station data. Data since that year has better coverage but still has the problem of data adjustments and a host of other issues mentioned in the audit.

Calculating the correct temperatures would require a huge amount of detailed data, time and effort, which is beyond the scope of this audit and perhaps even impossible. The primary conclusion of the audit is however that the dataset shows exaggerated warming and that global averages are far less certain than has been claimed.

One implication of the audit is that climate models have been tuned to match incorrect data, which would render incorrect their predictions of future temperatures and estimates of the human influence on temperatures.

Another implication is that the proposal that the Paris Climate Agreement adopt 1850-1899 averages as ‘indicative’ of pre-industrial temperatures is fatally flawed. During that period global coverage is low – it averages 30% across that time – and many land-based temperatures are very likely to be excessively adjusted and therefore incorrect. …

Ultimately it is the opinion of this author that the HadCRUT4 data, and any reports or claims based on it, do not form a credible basis for government policy on climate ….

———————————————-

You can buy a copy of this report for $8 US. I recommend reading it!

Hat tip to WUWT and JoNova.

My comments

“It is a capital mistake to theorize before you have all the evidence. It biases the judgment.”
— Sherlock Holmes in A Study in Scarlet, by Arthur Conan Doyle (1887).

This report makes an astonishing claim: that one of the principal global temperature datasets has poor quality control. If true, that is negligence, given the importance of this data. I hope Dr. McLean publishes this in a peer-reviewed journal, which would give it a higher level of review than a dissertation receives. I also look forward to critiques of his report by experts on the climate data records.

Ten years ago there was considerable discussion about the poor quality control in the surface temperature record. I wrote about it, and made this my top recommendation: “More funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.” The cost would be pocket lint compared to our total spending on climate science, and well worth it. Ross McKitrick gave a powerful call for action.

“I have often used the analogy of national Consumer Price Indexes to illustrate the ridiculous situation of the “Global Temperature” data. Each country has large professional staffs at their Stat agencies working on the monthly CPI using international protocols, using transparent methods, with independent academics looking over their shoulders weighing the various aggregation methodologies …and with historical archiving rules that allow backward revisions periodically if needed. It’s by no means perfect, but it’s a far cry from the f**king gong show we’re seeing here. …

“By contrast the Global Temperature numbers are coming from a bunch of disorganized academics chipping away at it periodically in their spare time. GISS numbers are handled (on Gavin’s admission) by a single half-time staffer, and the CRU says they’re stumped trying to find their original files back into the 70s and 80s, as well as the agreements under which they obtained the data and which to this day they invoke to prevent independent scrutiny.”

Assurances were given that the temperature datasets were reliable. Skeptics were mocked. But this new paper suggests that the problem was worse than most of us thought, and little or nothing has been done to improve what might be the most important data record used today. The other major global temperature datasets might be better, but I doubt it.

Nothing will change without public pressure. Push Congress to fund a full review and, if necessary, an update of these systems. Push Team Trump to implement these measures immediately.
