NOAA’s National Climatic Data Center recently announced that the last 12 months were the warmest on record in the “contiguous” U.S., extending the 2011-12 hot streak that has now eclipsed the previous record, set in 1999-2000, by half a degree Fahrenheit. Apparently, that was just too much for the Heartland Institute’s James Taylor, who used his regular column in Forbes magazine to accuse NOAA of “doctoring real-world temperature data”. According to Taylor, the “alarmists” at NOAA “simply erase the actual readings and substitute their own desired readings in their place”.
But it turns out that Taylor’s source is none other than hapless climate blogger Steven Goddard, who recently leveled incoherent and unsupported accusations against James Hansen and NASA’s Gistemp record, as well as NOAA. Goddard also relies on the same reviled NOAA data in his botched attempt to buttress his case that NASA is “hiding” an 80 year cooling trend. Never mind that the U.S. “lower 48” represents less than 2% of the Earth’s surface area in any event, or that past attempts to show U.S. cooling have been proven utterly wrong.
If Forbes has a shred of integrity, this sorry episode will surely result in an abject retraction and apology to NOAA, along with the banishment of Heartland from the magazine’s pages. And it’s also high time reputable commentators in the mainstream media called out the irresponsible behaviour of Forbes and other right-wing media.
James Taylor’s rant is a swift descent into weirdness, paragraph by paragraph, until arriving at his central thesis:
The bureaucracy at NOAA and NASA who report the U.S. temperature data undertake what they term “correcting” the raw data. These corrections are not just one-time affairs, either. As time goes by, older temperature readings are systematically and repeatedly made cooler, and then cooler still, and then cooler still, while more recent temperature readings are made warmer, and then warmer still, and then warmer still.
In covering the latest Forbes debacle, Jocelyn Fong of Media Matters gives a good summary of the U.S. Historical Climatology Network (USHCN), the 1,200-station climate data network used by NOAA to track climate across the U.S. She also contacted NOAA scientist David Easterling, who mildly observed that the “conclusions of the column sound like pure speculation on the part of the writer.” I’ll say.
In fact, the corrections applied to the station records of the USHCN are both reasonable and necessary to correct errors introduced in the original observation process, as explained in the above USHCN link, and summarized in a 2007 NOAA bulletin describing version 2 of the USHCN.
These artificial changes include station relocations, different instrumentation, and changes in the landscape surrounding the station (e.g. urbanization, removal or planting of vegetation, etc.). Some of these changes may result in “random” changes to the data. For example, even small station relocations can result in temperature readings that are either slightly cooler or slightly warmer than what would have occurred at the former site. Other changes, such as changes in urbanization in the vicinity of the station or changes in observing times, can systematically affect temperatures, e.g., add an urban warming bias to the temperature trends. Research has shown that the data from these kinds of changes can be corrected to a large degree based on physical and statistical methods (e.g., see Peterson 2006).
The bulletin goes on to explain that improved correction schemes for station moves and urbanization had recently been introduced as part of version 2 of the USHCN. This provides an excellent opportunity to see the NOAA correction process in action at that time. The following chart compares the NOAA annual U.S. temperature record before (version 1) and after (version 2) these changes.

Sure, some of the recent years have become ever so slightly warmer, but so have many of the years between 1910 and 1940. In fact, the impact on the actual long-term trend was minimal.
These small differences in average temperatures result in minor differences in annual rankings for some years. The new correction scheme has virtually no impact on the long-term temperature trend as annual temperature trends in Version 1 from 1895-2006 were 0.112°F/decade and in Version 2 the trends were 0.110°F/decade.
Got that? The change in the long-term trend was infinitesimal. And it actually decreased (if only by 0.002°F per decade)!
This example also makes clear that corrections are not systematically applied to the whole data set, as claimed by Taylor; rather, the correction procedures, all of which have been researched thoroughly and published in the scientific literature, are applied to new data as it arrives, with apparently only minor “ripple” effects back in time. After all, even when the whole data set was updated in 2007 with the advent of version 2 of the USHCN, the effect was negligible. So much for the supposed adjustment of the record “systematically and repeatedly” to show warming.
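For the record, the per-decade trends quoted in the NOAA bulletin are just linear least-squares fits to the annual values. Here is a minimal sketch of the calculation, using a synthetic temperature series (not actual USHCN data):

```python
import numpy as np

# Synthetic annual record for 1895-2006 with a built-in trend of
# 0.0112 °F/year (0.112 °F/decade), mimicking the version 1 figure.
# Real USHCN data would of course have year-to-year noise on top.
years = np.arange(1895, 2007)
temps = 52.0 + 0.0112 * (years - 1895)

# Ordinary least-squares slope, converted from per-year to per-decade.
slope_per_year = np.polyfit(years, temps, 1)[0]
print(f"trend: {slope_per_year * 10:.3f} °F/decade")  # → trend: 0.112 °F/decade
```

With real data the fitted slope absorbs the noise, which is why a 0.002°F/decade shift between versions is well within any reasonable uncertainty.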
Still, logic and facts count for little in the Heartland alternate universe. And if I’ve learned anything in my time blogging about climate science, it’s always revealing (and sometimes entertaining) to chase down the actual sources of tinfoil hattery. This time proves to be no exception. Here’s Taylor again:
Science blogger Steven Goddard at Real Science has posted temperature comparison charts (available here, and here) showing just how dramatically the NOAA and NASA bureaucrats have doctored the U.S. temperature data during the past several decades.
Ah, Steven Goddard – well, that explains a lot, although his sudden elevation to a Heartland go-to source is a bit mysterious, given his only previous Heartland connection was the short 2008 “analysis”, A Tale of Two Thermometers. There Goddard finds cooling in the HadCRUT temperature record, among other hilarious claims. Goddard’s parting shot discusses the NASA Gistemp temperature record.
The data has been systematically adjusted upwards in recent years – as can be seen in this graph, reproduced below. Temperatures from the years 1990 to present have more than one-half degree Fahrenheit artificially added on to them – which may account for most of the upwards trend in the NASA temperature set.
Here Goddard is referring to the USHCN corrections discussed above (and to which we shall return). There is no explanation as to how these adjustments to U.S. station data, comprising only 1.6% of the Earth’s surface, could possibly account for “most” of the global trend. Apparently, that has been left as an exercise for the reader.
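That 1.6% figure is easy to verify from approximate surface areas (the numbers below are round estimates, not precise survey values):

```python
# Rough check of the contiguous-U.S. share of the Earth's surface.
# Both areas are approximate round numbers.
contiguous_us_km2 = 8.1e6    # lower 48 states, approx.
earth_surface_km2 = 5.1e8    # total Earth surface, approx.

share = contiguous_us_km2 / earth_surface_km2
print(f"{share:.1%}")  # → 1.6%
```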
But Tale is a PhD thesis compared to Goddard’s recent outings. The first link above points to a Goddard post exposing some supposed “corruption” of the NASA Gistemp temperature record by “Hansen et al”. Goddard posts a blinking graph purporting to show substantial post-hoc changes in the 20th century U.S. temperature data. But there are no explanations of which data sets are being compared, so there is no way to even tell if they are equivalent, or represent different stages of processing. And I suspect this might also involve differences introduced after a mismatch in Gistemp’s U.S. data (originally identified by Steve McIntyre a few years ago) was corrected. If so, Goddard has failed to also show the changes in years after 2000 – all of which moved down relative to the preceding decade.
The full text of the post, interspersed between three graphs, reads:
So what about since 1999? Temperatures have also cooled in the 12 years since Hansen corrupted the US temperature record. …
[shows NOAA trend from 1999-2011]
Bottom line is that the US has been cooling for 80 years, and Hansen et al have completely corrupted the data set.
Lack of evidence aside, there are (at least) two problems with this.
- The NOAA U.S. short-term linear “trend” was indeed down from 1999 to 2011, as seen in NOAA’s interactive climate summary tool. But the long-term trend actually increased after 1999; NOAA has 1930-1999 at only 0.02°F per decade, while the 1930-2011 trend stood at 0.13°F per decade. The 1999-2011 period is simply too short to meaningfully assess the temperature trend.
- If NOAA “bureaucrats” are “systematically and repeatedly” bumping up recent temperatures, how did they ever allow that short term downward trend?
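The first point is worth illustrating. With realistic year-to-year variability, a 13-year window is far too noisy to pin down an underlying trend of 0.13°F per decade. A quick Monte Carlo sketch (synthetic data, with an assumed interannual noise level of about 0.8°F, not actual NOAA values) shows how often such a window produces a negative slope purely by chance:

```python
import numpy as np

# Synthetic experiment: a true warming trend of 0.13 °F/decade plus
# Gaussian year-to-year noise (assumed sd of 0.8 °F). How often does a
# 13-year window (e.g. 1999-2011) show a *negative* fitted slope?
rng = np.random.default_rng(42)
true_slope = 0.013          # °F per year
noise_sd = 0.8              # assumed interannual variability, °F
years = np.arange(1999, 2012)

trials = 2000
negative = 0
for _ in range(trials):
    temps = true_slope * years + rng.normal(0.0, noise_sd, years.size)
    if np.polyfit(years, temps, 1)[0] < 0:
        negative += 1

print(f"negative 13-year trends: {negative / trials:.0%} of trials")
```

In runs of this sketch, a large fraction of such windows (around 40%) come out negative, so a downward 1999-2011 slope tells us essentially nothing about the underlying trend.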
Now let’s move on to Taylor’s final “evidence” which invokes the Urban Heat Island and brings us back full circle to the USHCN network and its set of corrections.
Ironically, the government overseers of raw temperature data are doing just the opposite. As Goddard shows here, they are doctoring older temperature readings (when urban heat island effects were minimal) in a manner that makes the older temperature readings seem colder than was reported in the real-world data. At the same time, they are doctoring more recent temperature readings (when urban heat islands are more pronounced) in a manner that makes the more recent temperature readings seem warmer than the real-world data report.
It’s a good thing Goddard has Taylor to interpret his nonsense for him. In the linked post entitled Government Scientists Add Half A Degree, Then Claim That Temperatures Are Above Average, Goddard quotes a CNN report about the smashing of the U.S. temperature record, and then has this (and only this) to say:
Here in the US, these same good people at NOAA have been adding half a degree on to all temperatures for the last 326 months. Then they tell us that we need to have our taxes raised, because their inflated temperatures have been above average for the last 326 months.
As evidence, Goddard presents a single chart, more than ten years old, from NOAA itself, adding a highlight for the last 25 years (with the last, blank decade left as another exercise for the reader).
Not only are NOAA scientists “doctoring” the “real” temperature record, but they are brazenly broadcasting the fact on their own website!
It turns out that this chart comes from the USHCN description I pointed to at the top. But now we’re ready to take another look. As mentioned before, a series of corrections is applied to the USHCN raw data. In order, these are (following NOAA’s more elaborate and referenced summary):
- QC: Quality control (removal of “suspects” and “outliers”).
- TOB: Time of observation adjustment to adjust for differing and changing observation schedules.
- MMTS: Correction for bias introduced by changeover from regular thermometers to the Maximum/Minimum Temperature System at certain stations.
- SHAP: Station history adjustment to achieve homogeneity across station moves.
- FILNET: Adjustments to infill missing data, as well as further adjustments for overly frequent station moves.
- FINAL (UHI): Final adjustment for urban warming using a regression approach.
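The key point is that these steps are applied in sequence, each building on the output of the last. A schematic sketch of that pipeline follows; the step functions are trivial placeholders standing in for NOAA’s published algorithms, not the algorithms themselves:

```python
# Placeholder adjustment steps -- each real USHCN step is a published
# algorithm; these stubs only illustrate the sequential structure.
def quality_control(rec):  # drop physically impossible readings (°F)
    return [t for t in rec if -70.0 < t < 135.0]

def tob_adjust(rec):       # time-of-observation bias (stub)
    return rec

def mmts_adjust(rec):      # thermometer-changeover bias (stub)
    return rec

def shap_adjust(rec):      # station-move homogenization (stub)
    return rec

def filnet_infill(rec):    # infill missing values (stub)
    return rec

def uhi_adjust(rec):       # urban warming correction (stub)
    return rec

PIPELINE = [quality_control, tob_adjust, mmts_adjust,
            shap_adjust, filnet_infill, uhi_adjust]

def adjust_station_record(raw):
    """Apply each USHCN-style correction step in order."""
    record = raw
    for step in PIPELINE:
        record = step(record)
    return record

# A record with one impossible reading (999.0) loses it at the QC step.
print(adjust_station_record([51.2, 999.0, 53.4]))  # → [51.2, 53.4]
```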
Taken together, these adjustments result in the overall effect seen in the above chart. So, as I’ve already shown above, the Goddard-Taylor argument really boils down to an unsubstantiated objection to the correction algorithms themselves. NOAA supplies the following chart that shows the effect of each adjustment step.
If one is interested in trends since 1970, clearly the TOB adjustment has the largest effect, going from 0 to 0.3°F between 1970 and 1990. This adjustment accounts for changes in observation times, which apparently shifted largely from late afternoon to morning at many stations, leading to a cooling bias if left uncorrected.
If Goddard or Taylor wants to argue that this adjustment is too large, surely one would expect some evidence, or at least some mention of the issue. But as far as I can tell, Goddard has never even mentioned this or the other USHCN corrections, let alone discussed or analysed them, even to the limited extent I have above. Meanwhile, NOAA provides excellent short explanations, with pointers to all the papers for would-be auditors to knock themselves out.
For example, why did net station history adjustments go up in the second half of the twentieth century?
Application of the Station History Adjustment Procedure (yellow line) resulted in an average increase in US temperatures, especially from 1950 to 1980. During this time, many sites were relocated from city locations to airports and from roof tops to grassy areas. This often resulted in cooler readings than were observed at the previous sites. When adjustments were applied to correct for these artificial changes, average US temperature anomalies were cooler in the first half of the 20th century and effectively warmed throughout the later half.
As for urban warming adjustments, these have also been incorporated. But the magnitude has only reached about 0.1 deg F. Again, though, this is in line with others’ research on the topic, including the contrarians’ last, BEST hope.
[Update, June 22: In USHCN version 2, urbanization effects are accounted as “local” trend changes in the “change-point” analysis step for each station, along with the station history changes. The Reno, Nevada record gives a nice example, with both a step change correction (move to the airport) and an apparent UHI downward correction. However, as already mentioned, the overall impact of the change to version 2 on the temperature trend was negligible. ]
One final observation on the various adjustments: All the curves are fairly flat through most of the 90s as they approach 2000 (the first year not shown). This suggests that the adjustments may well have stabilized so that the average difference between adjusted and raw temperatures has remained steady or even declined in recent years. But Taylor’s accusation implies that this difference should be ever growing as recent temperatures are “systematically and repeatedly” adjusted upward.
In summary, Goddard’s wild and uninformed misinterpretation of NOAA’s own documentation notwithstanding, there is absolutely no evidence whatsoever for these accusations of “doctoring” temperature data.
It would be easy to laugh it all off, were it not for the mainstream media platform that Forbes has provided for this outrageous conspiracy theory. Fortunately, few others have picked up on it so far.
However, it should also not be forgotten that Joe Barton’s harassment of climate scientists was inspired by a Wall Street Journal article that greatly exaggerated the importance of McIntyre and McKitrick’s 2005 critique of the MBH “hockey stick”. That’s why, as far as I’m concerned, the issue is not just Heartland or other like-minded think tanks. Even more disturbing is the continuing complaisance of supposed mainstream media like Forbes and Fox News that continue to peddle defamatory propaganda.
Meanwhile more responsible media still buy into a false balance between climate scientists and contrarians, or else simply look away.
[Update, June 21: As I noted in a comment below, many of the relevant papers can be found in USHCN version 2 repository (along with raw, TOB-adjusted and fully adjusted USHCN data sets).]
[Update, June 22: The UAH satellite “USA48” trend agrees well with NOAA’s USHCN record, contradicting the naysayers’ claims of flat or even cooling temperatures in the contiguous U.S.
- 1979-2011 USHCN-NOAA: 0.25 C/decade (0.45 F/decade)
- 1979-2011 USA48-UAH LT: 0.20 C/decade (0.36 F/decade)
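(For the unit conversion: temperature differences, and hence trends, scale by a factor of 1.8 between °C and °F, since the 32° offset cancels in differences.)

```python
# Trend conversion: Δ°F = 1.8 × Δ°C (the 32° offset cancels in differences).
for c_trend in (0.25, 0.20):
    print(f"{c_trend:.2f} C/decade = {c_trend * 1.8:.2f} F/decade")
# → 0.25 C/decade = 0.45 F/decade
# → 0.20 C/decade = 0.36 F/decade
```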
It should also be pointed out that the satellite record trend tends to be more volatile and reflects ENSO swings more strongly. So even that relatively small discrepancy is likely to narrow or even disappear during the next El Niño event. Already the USA48 trend has risen to 0.24 C/decade five months into 2012.
Ironically, it was a comment by “Paul S” on Roy Spencer’s proposed U.S. Population Density Adjusted Temperature Dataset (PDAT) that led me to this comparison (see also Tamino’s Roy Spencer, Man of Mystery post). PDAT has a large downward UHI adjustment leading to an essentially flat trend, which is contradicted by UAH USA48, HadCRUT and USHCN. But that contradiction with the data set Spencer himself founded didn’t deter him from launching a critique of USHCN a week later. And – surprise, surprise! – it turns out James Taylor used Spencer to level even more far-fetched accusations of NOAA temperature data “doctoring” back in April.
However, that’s a story for another time. ]