Wegman Report update, part 1: More dubious scholarship in full colour

This is the final instalment in a series of posts documenting dubious scholarship and unattributed sources in the background chapter of the touchstone of climate contrarians known as the Wegman Report. That report has been touted as Exhibit A proving the “destruction” of Michael Mann’s “hockey stick” graph by self-styled climate auditor Steve McIntyre.

Previously, I found extensive passages bearing “striking similarity” to a classic text by the distinguished paleoclimatologist (and “hockey stick” co-author) Raymond Bradley in the background sections on tree rings and on ice cores. Subsequently, the background section on social networks was found to contain material apparently drawn without attribution from a variety of sources, including Wikipedia and several text books.

This time, I’m looking at section 2.2 (see Wegman Report PDF at p. 15), which gives the background of key statistical concepts, including Principal Component Analysis. Astonishingly, even this section appears to contain a significant amount of unattributed material from other sources, although quite a bit less than the other sections. Again, Wikipedia appears to be a key source, along with a couple of text books.

I’ll also introduce some refinements to the text analysis, based largely on John Mashey’s recent innovations. Those refinements allow a better characterization of the relationship between various passages in Wegman et al and their apparent antecedents, as well as permitting a quantitative analysis based on word counts.

Despite my relative success in adducing the antecedents for other background sections, for some time I avoided serious sustained sleuthing on section 2.2, which describes Principal Component Analysis and time series noise models. Surely this background section, at least, was well within the authors’ ambit of expertise and there would be no need to borrow liberally without attribution.

Still, this section, like much of the report, contained no citations at all. So eventually I did make the attempt, but my initial efforts yielded only one passage of “striking similarity” – the “colors of noise” passage also found in Wikipedia, as discussed in this comment back in April.

Recently, though, inspired by John Mashey’s research into other parts of the report, I tried again, this time searching for smaller blocks of text. In the end, no fewer than nine possible unattributed sources have been identified (see the list at the end of the full textual side-by-side comparison with identified possible antecedents). In general, there were even more slight changes and rearrangements of the various sources than seen in previously analyzed sections, making detection of those possible antecedents more difficult.

In fact, to evaluate just how “strikingly similar” some of the newly discovered passages were, refinement of the analysis techniques became necessary. Borrowing Mashey’s concept of “longest common sub-sequence”, I highlighted exactly identical text in cyan in both source and target text, while taking care to separate the blocks where the text was rearranged or separated by changed text.

Next, trivial changes were highlighted with yellow. These are slight changes of tense, number or voice (i.e. active to passive), as well as substitution of synonyms or similar sounding words. Finally, changes or additions that introduced issues were underlined; these issues might be a simple trivial error, a change in meaning or even the introduction of distortion or bias.
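The actual comparisons were highlighted by hand, but the block-matching idea can be sketched mechanically. Here is a rough illustration using Python's difflib, applied to the "colors of noise" sentences discussed below (difflib is simply a convenient stand-in for the "longest common sub-sequence" notion, not a tool used in the analysis):

```python
# Rough sketch of finding identical word sub-blocks between a source
# passage and a target passage, as in the side-by-side comparisons.
from difflib import SequenceMatcher

source = ("there are many forms of noise with various frequency "
          "characteristics that are classified by color").split()
target = ("there are many types of noise with varying frequencies "
          "each classified by a color").split()

sm = SequenceMatcher(None, source, target, autojunk=False)
blocks = [b for b in sm.get_matching_blocks() if b.size > 0]
identical = sum(b.size for b in blocks)

for b in blocks:
    print("identical block:", " ".join(target[b.b:b.b + b.size]))
print("%d identical words in %d blocks (avg length %.1f)"
      % (identical, len(blocks), identical / len(blocks)))
```

The number of separate identical blocks, relative to the identical word count, is exactly the kind of indicator of rearrangement used in the metrics presented at the end of this post.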

Now let’s look at some examples (each of these can be found in the side-by-side comparison, or Cmp for short, on the page noted). I’ll start with the above-mentioned “colors of noise” passage.

Here is the first sentence from the Wikipedia article Colors of Noise (from April 12, 2006):

There are many forms of noise with various frequency characteristics that are classified by “color”.

The corresponding sentence in Wegman et al (at p. 15, Cmp p. 2) is:

There are many types of noise with varying frequencies each classified by a color.

The errors introduced by the changes are perhaps not serious, but they do bespeak a possible lack of understanding by the responsible author. (“Varying frequencies” implies that each type of noise would have a single dominant distinguishing frequency, while the change from color in quotes to “a color” obfuscates the conceptual nature of the classification).

The changes in the next two sentences are mainly removal of text, shown with strikeout (so we’ll show the original Wikipedia version only):

The color names for these different types of sounds are derived from an analogy between the spectrum of frequencies of sound wave present in the sound (as shown in the blue diagrams) and the equivalent spectrum of light wave frequencies. That is, if the sound wave pattern of blue noise were translated into light waves, the resulting light would be blue, and so on

Clearly, the above three sentences in Wegman et al, taken together, bear a very convincing and striking similarity to the Wikipedia passage. Even the reference to “sounds” has been left as is, instead of the obvious change to a more general term, such as “signal”. All the same, the identical text has been broken up into no fewer than eight separate sub-blocks.

Things get even more interesting in our next example (Wegman p. 17, Cmp p. 4). Here is the passage from Wegman et al. describing “long memory” processes:

Random (or stochastic) processes whose autocorrelation function, decaying as a power law, sums to infinity are known as long range correlations or long range dependent processes. Because the decay is slow, as opposed to exponential decay, these processes are said to have long memory. Applications exhibiting long-range dependence include Ethernet traffic, financial time series, geophysical time series such as variation in temperature, and amplitude and frequency variation in EEG signals.

The apparent (but unattributed) antecedent is from the introduction to Processes with long-range correlations: theory and applications, edited by Govindan Rangarajan and Mingzhou Ding:

Processes with long range correlations (also called long range dependent processes) occur ubiquitously in nature. They are defined as random stochastic processes whose autocorrelation function, decaying as a power law in the lag variable for large lag values, sums to infinity. Because of this slow decay (as opposed to an exponential decay), these processes are also said to have long memory. … A partial list of problems involving long range dependence include: Anomalous diffusion, potential energy fluctuations in small atomic clusters, Ethernet traffic, geophysical time series such as variation in temperature and rainfall records, financial time series, electronic device noises in field effect and bipolar transistors, and amplitude and frequency variation in music, EEG signals etc. …

The similarity is obvious from the sheer amount of highlighted text. But the degree of rearrangement of text is staggering; so much so that one assumes the passage may have been edited a few times. This short passage contains no fewer than 18 rearranged separate blocks of identical text, some consisting of only a single word or two. For example, “Because of this slow decay” becomes “Because the decay is slow”, with all three common words – “because”, “decay” and “slow” – rearranged and separated by trivially changed words.

As before, a couple of surprising errors have been introduced. For one thing, the processes discussed are not themselves “long range correlations”; rather they are processes with long range correlations.
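Incidentally, the defining property quoted above – an autocorrelation decaying as a power law sums to infinity, while an exponential decay does not – is easy to check numerically. In this sketch the exponent (-0.5) and base (0.5) are my own illustrative choices, not values from either source:

```python
# Numerical illustration of the "long memory" definition: partial sums
# of a power-law autocorrelation keep growing without bound, while
# partial sums of an exponentially decaying one converge to a limit.

def partial_sum(rho, n):
    """Sum rho(k) over lags k = 1..n."""
    return sum(rho(k) for k in range(1, n + 1))

power_law = lambda k: k ** -0.5   # long-range dependent: sum diverges
exponential = lambda k: 0.5 ** k  # short memory: sum converges to 1

for n in (10, 1000, 100000):
    print(n, partial_sum(power_law, n), partial_sum(exponential, n))
```

The power-law partial sums grow roughly like twice the square root of n, while the exponential partial sums settle almost immediately.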

The next example (a Wikipedia article on Self-similarity) is included more for amusement than anything else.

A self-similar object is exactly or approximately similar to a part of itself. … Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales. Self-similarity is a typical property of fractals.

Wegman et al’s version is very similar, but not quite the same (Wegman et al, p. 17, Cmp p. 5):

An object with self-similarity is exactly or approximately similar to a part of itself. For example, many coastlines in the real world are self-similar since parts of them show the same properties at many scales. Self-similarity is a common property of many fractals

Our final example actually comes from the beginning of the section (Wegman et al, p. 15; PDF, p. 1).

Principal component analysis tries to reduce the dimensionality of this data set while also trying to explain the variation present as much as possible. To achieve this, the original set of variables is transformed into a new set of variables, called the principal components (PC) that are uncorrelated and arranged in the order of decreasing “explained variance.” It is hoped that the first several PCs explain most of the variation that was present in the many original variables.

Some readers might recognize the similar text from the Introduction of Ian Jolliffe’s classic Principal Component Analysis.

The central idea of principal component analysis (PCA) is to reduce the dimensionality of a data set consisting of a large number of interrelated variables, while retaining as much as possible of the variation present in the data set. This is achieved by transforming to a new set of variables, the principal components (PCs), which are uncorrelated and which are ordered so that the first few retain most of the variation present in all of the original variables. [p. 2] … but it is hoped, in general, that most of the variation in x will be accounted for by m PCs, where m << p.

At first glance, this example might be considered more of a paraphrased definition, and perhaps not as questionable as the other examples. On the other hand, consider the juxtaposition of the following identical key phrases, present in both versions:

  • “Principal component analysis”
  • “to reduce the dimensionality of”
  • “the variation present”
  • “a new set of variables”
  • “as much as possible”
  • “are uncorrelated”
  • “it is hoped”

A Google search on this set of phrases returns only a handful of hits, including the Jolliffe text itself, Wegman et al and a smattering of others attributing the passage to Jolliffe. It seems implausible, then, that this passage was not directly inspired by Jolliffe. As such, it definitely should have been attributed and probably the original should have been block-quoted.

Of course, PCA is at the heart of the McIntyre critique of the work of Mann, Bradley and Hughes and therefore of Wegman et al. The short description above refers to the possibility that the first few principal components (PCs) might “account for” or “explain” most of the variation in the original larger data set. This implies that enough PCs must be retained to accomplish this. Normally at least enough PCs to account for most of the original data set’s variance should be retained, and typically other conditions (such as convergence upon retention of successive PCs) would be imposed.
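As a hedged illustration of the PC retention idea (the toy data, the 90% variance threshold and all other numbers here are my own choices for illustration, not anything from Jolliffe or Wegman et al), the transformation to uncorrelated PCs and a simple retention rule can be sketched in a few lines of numpy:

```python
import numpy as np

# Toy data set: 100 observations of 5 correlated variables driven by
# 2 underlying factors plus a little noise (illustrative only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.1 * rng.normal(size=(100, 5))

# Transform to uncorrelated PCs via the eigendecomposition of the
# covariance matrix, ordered by decreasing "explained variance".
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
pcs = Xc @ eigvecs                      # the principal components

explained = eigvals / eigvals.sum()
cumulative = np.cumsum(explained)

# One simple retention rule among the many Jolliffe discusses: keep
# enough PCs to account for 90% of the total variance.
m = int(np.searchsorted(cumulative, 0.90)) + 1
print("explained variance:", np.round(explained, 3), "-> retain", m, "PCs")
```

With two underlying factors and small noise, the first couple of PCs account for nearly all the variance, which is exactly the "it is hoped" scenario in the quoted definitions.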

[Update, July 31: In fact, the 2002 edition of Jolliffe’s text (which remains the foremost reference on PCA), contains a vastly expanded chapter on the topic of retention of PCs. It describes a number of rules, some “ad hoc” (but plausible and highly useful) and some statistically based. There’s even a section on the retention rules used in atmospheric sciences in which Preisendorfer and Mobley’s Principal Component Analysis in Meteorology and Oceanography figures prominently. Variations of “Preisendorfer’s rule N” are discussed.]

Tellingly, Wegman et al never once discuss this crucial aspect of PCA, even though a thorough examination of the issue of PC retention criteria was a key element in the most extensive peer-reviewed critique of McIntyre and McKitrick’s work, namely that found in Wahl and Ammann’s Robustness of the Mann, Bradley, Hughes reconstruction of Northern Hemisphere surface temperatures (Climatic Change 2007).

But that is a discussion for another time. For now, I’ll conclude by presenting overall metrics that show some interesting contrasts between the various sections of chapter 2. These metrics are based on word counts (WC).

The percentage of “strikingly similar” (SS) is simply based on the word count of passages with identified antecedents, relative to the overall word count. The next two columns show the percentage of combined identical and trivially changed text (ID+TC), and the percentage of identical text alone. The average identical text block length (BL) is calculated by ascertaining the total word count of identical text and dividing by the number of separate blocks of identical (ID) text in the target text. Thus it is an indicator of the amount of change and rearrangement that may have occurred.

Finally, the Src column shows the number of apparent antecedent sources and the Issues column shows the number of issues, both major (in bold) and minor (in normal typeface). As previously noted, these typically involve changes in meaning and errors.
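For concreteness, the column definitions above can be expressed as a small calculation. The word counts in this sketch are invented for illustration; they are not the actual tallies behind the table below:

```python
# The word-count metrics: SS, ID+TC, ID (all as percentages of the
# section's total word count) and BL (average identical block length).
def metrics(total_wc, ss_wc, identical_wc, trivial_wc, id_blocks):
    ss_pct = 100.0 * ss_wc / total_wc                         # SS
    id_tc_pct = 100.0 * (identical_wc + trivial_wc) / total_wc  # ID+TC
    id_pct = 100.0 * identical_wc / total_wc                  # ID
    bl = identical_wc / id_blocks                             # BL
    return ss_pct, id_tc_pct, id_pct, bl

# Invented example counts for a hypothetical 1000-word section.
ss, id_tc, ident, bl = metrics(total_wc=1000, ss_wc=280,
                               identical_wc=170, trivial_wc=90,
                               id_blocks=40)
print("SS %.0f%%  ID+TC %.0f%%  ID %.0f%%  BL %.2f" % (ss, id_tc, ident, bl))
```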



Section                        SS    ID+TC   ID    BL   Src  Issues
2.1 Tree rings [Cmp]           67%   51%    38%   2.9    1   8, 3
2.1 Ice cores & corals [Cmp]   93%   90%    65%   4.8    1   0, 1
2.2 PCA and stats [Cmp]        28%   26%     –    4.3    9   0, 4
2.3 Social networks [Cmp]      87%   85%    76%   8.8    3   0, 8

As can be seen, all these metrics have also been generated for the previously analyzed background sections, along with links to the original discussions and updated colour-coded side-by-side comparisons.

The tree ring section is notable for the relatively large amount of added and changed material, both within and outside of the “strikingly similar” passages. This, along with the large number of major issues, could reflect the special attention paid to this section and the apparent introduction of key distortions that undermine the original source, Raymond Bradley’s classic Paleoclimatology: Reconstructing Climates of the Quaternary. (By the way, Chapter 10 is available once more online as a PDF at Keith Briffa’s web page).

In contrast, the section on ice cores and corals introduced many trivial changes, but with little change in meaning.

The social networks section (2.3) clearly has the most “strikingly similar” material. Moreover, the “strikingly similar” passages contain little new material, consisting almost entirely of identical or trivially changed text. The relatively large block size (almost 9 words) suggests that this section underwent less extensive editing, although numerous trivial changes are scattered throughout.

Finally, the PCA and noise model section discussed above clearly contains the least “strikingly similar” material. But the surprise here is that there is any at all. Not only that, but changes made by Wegman et al have apparently introduced errors. Moreover, the sheer number of apparent sources and relative brevity of the antecedent passages means that additional antecedents cannot be ruled out.

Nevertheless, this likely brings to a close my examination of the unattributed sources in the background chapter of the Wegman report, now that I have covered each of the three sections from that perspective. But that does not mean we are done with the report as a whole, or even the background chapter – far from it.

Future posts will cover such topics as Wegman’s “trick to hide the deletion” of Wahl and Ammann’s critique of McIntyre and McKitrick, as well as a discussion of the supposed “peer review” of the Wegman report, which Wegman claimed was “similar” to that of the National Research Council (which produced a competing report from a distinguished team led by Gerald North). I’ll also revisit the tree-ring section, but this time focusing on the serious issues raised by Wegman et al’s changes and omissions, relative to Bradley’s original.

And, very soon now, John Mashey will present his exhaustive investigation of other aspects of “strange” scholarship in the Wegman Report, including a jaw-dropping analysis of the “Summaries of Important Papers” and a complete breakdown of all references and citations. So stay tuned; there’s plenty more on the way.



1. Edward J. Wegman, David W. Scott and Yasmin H. Said, Ad Hoc Committee Report on the “Hockey Stick” Reconstruction: A Report to Chairman Barton, House Committee on Energy and Commerce, and to Chairman Whitfield, House Subcommittee on Oversight and Investigations, 2006. [PDF]

2. Ian T. Jolliffe, Principal Component Analysis (Springer, 2nd ed. 2002)

3. Wikipedia article – Colors of Noise (April 12, 2006 version) – Available online at: http://en.wikipedia.org/w/index.php?title=Colors_of_noise&oldid=48074859

4. Govindan Rangarajan, Mingzhou Ding (eds.), Processes with long-range correlations: theory and applications (Springer, 2003)

5. Wikipedia article – Self-similarity (Mar. 20, 2006 version) – Available online at: http://en.wikipedia.org/w/index.php?title=Self-similarity&oldid=44580086


Detailed comparisons of Wegman Report section 2.2 to these apparent antecedents, as well as other Wikipedia articles, are found in:

A comparison of Ad Hoc Committee Report (Wegman, Scott, Said) section 2.2, p.15-17 and Various unattributed sources on statistics and noise models  [PDF]



54 responses to “Wegman Report update, part 1: More dubious scholarship in full colour”

  1. That Wegman’s demolition of Mann’s shoddy statistics may be somewhat plagiarised, does not in any way invalidate his argument or revive the Hockey Stick.

    [DC: The problems go way beyond whether the report is “somewhat plagiarized”. All of the climate science and social network analysis is deeply flawed, and even the analysis of Mann’s use of PCA in reducing the North American data set avoids the substantive issue of PC retention.

    The essential picture of MBH98/99 has been confirmed and strengthened in the intervening years, including Mann’s own updated analyses which no longer rely on PCA at all.

    It’s also worth emphasizing that there is real prima facie evidence of research misconduct here, unlike the bogus “climategate” accusations. The real question is what will it take for GMU to order a complete independent investigation into Wegman and Said’s conduct. ]

  2. Gavin's Pussycat

    DC further to your remark on the lack of understanding in the principal author:

    Red noise in the paleoclimatology context comes from the fact that tree rings have correlation from year to year, that is, if a tree grows well in a given year, it will store carbohydrates and will tend to have a good year of growth the following year as well.

    Bring on the amateur botanists… not a word on climate itself being temporally correlated, e.g., due to El Nino with a dominant quasi-period of 3-4 years (which would strengthen the report’s argument FWIW).

    And BTW where would a tree store these carbohydrates, if not in the wood (i.e., tree rings)? In the underground bulb? 😉 And why would it store anything for the purpose of building future year rings instead of building them immediately?

    • Bradley, chapter 10, p. 402 explains it better than Wegman et al:

      Furthermore, climatic conditions prior to the growth period may “precondition” physiological processes within the tree and hence strongly influence subsequent growth (Fig. 10.5). For the same reason, tree growth and food production in one year may influence growth in the following year, and lead to a strong serial correlation or autocorrelation in the tree-ring record. Tree growth in marginal environments is thus commonly correlated with a number of different climatic factors in both the growth season (year t0) and in the preceding months, as well as with the record of prior growth itself (generally in the preceding growth years, t-1, and t-2). Indeed in some dendroclimatic reconstructions, tree growth in subsequent years (t+1, t+2 etc.) may also be included as they also contain climatic information about year t0.

      [Note: The original used subscripts for year t0 and so on. ]

    • Credit where credit is due: It does appear to be their own words for better or worse.

  3. It’s amazing that people still think that the ‘Hockey Stick’ needs ‘reviving’, seeing as how so many studies since have confirmed it. If only those in denial would let go of this strange obsession they have with the original and try to get more up-to-date – they might learn a thing or two…especially about all the time they have wasted on nothing.

  4. JMurphy: Of course the deniers won’t let go of the hockey stick, and for a very simple reason: It’s an incredibly effective visual for communicating with newcomers. All deniers care about is delaying action on climate change as long as possible, and they do this by preventing members of the Reality Based Community (which includes real scientists and scientist wannabes, like me) from teaching newcomers about the basic facts of AGW.

    Therefore, the hockey stick MUST be attacked until it is defeated, no matter what cost the deniers have to pay with their own credibility.

  5. Awesome stuff DC. Clearly Wegman’s plagiarism is not material to the veracity of the underlying argument that the report makes (whether it is his own or not), although agreed that such is lacking, as per your reply to the first comment. However, beyond the relevance of this work toward debunking the arguments expressed in the Wegman report, which is non-trivial, it is important that climate scientists not be the lone punching bags in this public drama. The skeptics are going to look great by comparison if real scientists are the ones under the microscope for every move they make. It’s long past time to pull back the veil on these actors to show the public what real fraudulence looks like.

    PS Is it wrong to secretly hope that someone successfully hacks into Steve McIntyre’s email archives one of these days and then uploads all the damning evidence to WUWT servers? Then again, knowing his Nixonian history, he’s probably by now already lit a match under it and reformatted all the attendant hard drives.

    [DC: It would be wrong for someone to hack McIntyre’s email. It’s not wrong to speculate that emails and other communication between Barton staffers on the one hand, and McIntyre, McKitrick, Wegman and/or Said on the other, would be very embarrassing. It wouldn’t surprise me, though, if Peter Spencer covered his tracks, so perhaps they are not in the official House email archives.

    But in the end, the important point is that mainstream journalists start asking the right questions, and calling the various actors to account. The Guardian crew let McIntyre get away with clear misrepresentations when he was in the U.K., and failed to ask the tough questions that need to be asked. ]

  6. PS Is it wrong to secretly hope that someone successfully hacks into Steve McIntyre’s email archives one of these days and then uploads all the damning evidence to WUWT servers?

    Joking aside, I do not secretly hope that and would condemn the act if it happened.

    I’ve been rooted once and didn’t appreciate it one bit.

  7. Actually, this work does bear on the underlying arguments, if you study the whole WR.

    Start from the bottom, i.e., the introduction (pp. 13-22). People who understand material generally don’t do this sort of plagiarism. Even if they unconsciously repeat ideas or phrases (it happens), knowledgeable people don’t *create* bunches of errors.
    The same things happen on 25 of the 26 pages of Summarized Important Papers.

    Put another way, 35 of 91 pages contain substantial material of “striking similarity”, and yet many of the opinions expressed elsewhere, or in testimony, are given with the assurance of knowledge.

    Actually, only a small fraction (~13 of 91 pages) of the WR is actually devoted to serious statistical work, [pp.28-37, pp61-63].

    p.34 is the infamous IPCC 1990 sketch, which they then re-use for several pages. A few pages [61-62] are fairly standard definitions. DC just showed that pp.15-17 had problems.

    The point is that all this introductory material and the paper summaries, and the bizarre Bibliography, seem designed to build an impression of expertise, so that they can offer lots of opinions.

    Wegman was still doing that in late 2007.
    I suggest taking a look at the 2007 Meeting, where a climate/statistics workshop was held, with top-notch people. Peruse the other presentations, which strike me as high-quality discussions. Berger’s talk, esp. p. 19, had some good material.

    Now, look at Wegman’s talk, delivered to excellent {statisticians with climate experience, climate scientists with good statistics skills}.

    A few weeks later, he gave a similar talk at GMU, with an interesting abstract, which noted that climate scientists might be irritated … but it was basically good for them.

    Here’s a fun exercise: study his talk and see if you find any problems…

    Again, this ties back to the plagiarism: some of these opinions seem to arise from that material.

    • Rattus Norvegicus

      “…any problems.” Besides the fact that most of his questions could have been answered by reading and understanding relevant textbooks? Besides the fact that he clearly did not know the basics of paleo reconstructions? Besides the fact that he must have felt like an idiot while giving this talk?

  8. The colour coding is definitely useful.

    I find the underline and strikeout to be visually quite similar in a quick scan, so I’d be tempted to either:

    a) colour-code the issues (although this is complicated when there’s an issue in a block of otherwise-coloured text), or

    b) add colour to the strikeout to visually distinguish it from the underlined text.

    But that’s just me.

    [DC: One thing to keep in mind is that the strikeout happens only in the source text, and the underline only in the changed text. And it would never be the same text, by definition. Still, if there were a lot of both it could be a problem.

    Option (a) doesn’t look practical, although I could experiment with changing the font colour, say to red. I think your option (b) might work too, again with changed font colour (no highlighting). ]

  9. Eek, the interesting abstract in my last post was supposed to be:

  10. Visual format is worth experimenting with, but “how about this, how about that” is less useful than doing experiments and helping DC out.

    How about picking a paragraph or two with representative examples, putting it up as a .doc file, and letting people experiment and send in ones they like.

    For example, the strikethroughs may or may not actually be useful. They certainly are optional.
    I used to manage cognitive psychologists. If they didn’t *know* the answer, they tried real examples on people to find out.

    I originally started with grey highlighting, but when I wanted to add a second category, a different grey really didn’t work. DC suggested the cyan, which seemed better than the others in the limited palette, so I actually went through and changed many pages of this stuff, not an experience to repeat, since I couldn’t find any way but manually.

  11. Climategate showed irrefutable evidence of attempts to cook the books so as to prop up AGW. Your dismissal of this as a “fake” scandal simply shows your own dedication to science fraud in the pursuit of some other agenda.

    [DC: I see. So my dissection and debunking of the so-called “irrefutable evidence of attempts to cook the books” is itself proof of involvement in the conspiracy to commit “science fraud”. Got it.

    And, yes, this is off-topic on this thread, so let’s not continue this. Thanks! ]

  12. Gavin's Pussycat

    John, yep, the impression of expertise.

    He even includes easily-debunked talking points designed to fool folks without any relevant science background — and I get the feeling he actually believes them. Plain embarrassing before this audience. Yes, I believe his assertion that they were irritated 😉

  13. [DC: Off topic – I’ve discussed “climategate” and debunked McIntyre’s various accusations on several occasions, starting back here: https://deepclimate.org/2009/12/11/mcintyre-provides-fodder-for-skeptics/. Or search “Climategate” in the search box. Thanks!]

  14. Rattus,

    I would like to know what you mean by:

    > [M]ost of his questions could have been answered by reading and understanding relevant textbooks [.]

    • Rattus Norvegicus

      For example, his questions 1-4 should be answered, in the general sense, by reading (and understanding) any text on dendroclimatology. Questions 5 and 6 are well understood and answered in the literature (in fact, they’ve been around so long that the answers are probably in glaciology texts).

      He seems to have a problem with the use of PCA (justified in my opinion) but PCA is not used anymore in Mann’s studies and hasn’t been used since the mid 2000’s when he started looking at the performance of various reconstruction techniques. In this sense Wegman is clearly not up to speed on the current state of the art. Once again reading (and understanding) the literature would have helped him here.

      Questions 13, 14 and 15 are easily answered. Question 16 is just nonsense. The link in the models is very tight simply because CO2 is a greenhouse gas with well understood properties. OTOH, the link between the input forcing and the output temperature is not nearly so tightly coupled since perturbations to the GHE via additional CO2 forcing modify other climatic processes which eventually results in the output temperature which is a sum of the positive and negative feedback mechanisms triggered by the initial perturbation.

      Questions 17 and 18 are more useful and were discussed in Trenbreth’s session in a much more useful way.

      His questions about the surface record are straight out of Watt’s and D’Aleo.

      Quick call 911 and send the Climate Police! Or at least charge him with intellectual laziness.

      As Berger pointed out in his presentation, one of the main barriers to increased participation by statisticians in climate science is the time necessary to come up to speed in the new field and to adequately understand the statistical issues faced by the scientists he is working with. Judging from his PP deck, as of 2007 he hadn’t bothered to do this, but he had swallowed the denialist bait hook, line and sinker.

  15. [M]ost of his questions could have been answered by reading and understanding relevant textbooks

    … or would not have been asked.

    Wegman’s very first question (let’s call it question 0, since it occurs as a prelude to questions 1-4):

    How were the 70 trees in the NOAMER 1400 network selected?

    Think about it: could there have been only 70 trees in that network? Of course not; there were 70 proxy series (each of which contained a varying number of tree samples depending on the year).

    Here is Mann’s list of all the proxies used.

    Let’s pick the first Graybill chronology from ITRDB-North America list, AZ510.

    Here is the ITRDB page for AZ510.

    Here is the chronology for that series. Not sure how to read it? Don’t worry, I’m sure that Wegman doesn’t either. If you search for 1400 you see a line that starts

    SFP51914001112 151105 15

    That’s a six-character header followed by the decade start year (1400), then the chronology value for 1400 (1112) and the number of samples (15). And so on. Of course by 1980 we see more samples (26 as it happens).

    That’s one proxy series. But it’s not one tree sample, it’s fifteen.
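    The fixed-width layout described above can be parsed mechanically. The field widths in this sketch are inferred from the single example line (a 6-character site header, a 4-digit decade start year, then repeating pairs of a 4-digit chronology value and a 3-digit sample count), so treat it as an illustration rather than a full reader for the ITRDB format:

    ```python
    # Parse the quoted chronology line into (value, sample count) pairs.
    line = "SFP51914001112 151105 15"

    site = line[:6]            # 6-char site header, "SFP519"
    year = int(line[6:10])     # decade start year, 1400
    data = line[10:]

    # Each 7-character chunk holds one 4-digit value and 3-digit count.
    pairs = [(int(data[i:i + 4]), int(data[i + 4:i + 7]))
             for i in range(0, len(data), 7)]

    print(site, year, pairs)   # pairs[0] is the 1400 value and count
    ```

    The first pair recovers exactly the numbers walked through above: chronology value 1112 with 15 samples for 1400.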

    So Wegman didn’t even know the difference between a proxy tree sample and a site chronology (and, yes, that’s one of the issues in the tree ring background section referred to in the post).

    Here’s his summary of tree-ring proxies (slide 8):

    – tree ring size and density variations
    – best signal when trees are stressed
    – latitude and altitude

    Most paleos would refer to width. But much worse – the use of latitude- or altitude-stressed trees applies to ring-width reconstructions, not density-based reconstructions. Again, that’s an important distinction that was flubbed in the Wegman Report background section on tree rings.
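    By the way, the fixed-width decade line described above can be sketched in a few lines of Python. This is a minimal illustration only, assuming the layout inferred from the single example line (6-character site ID, 4-digit decade start year, then repeating 7-character fields of a 4-digit chronology value plus a 3-digit sample count); the function name is mine, and real ITRDB files have further conventions (e.g. sentinel end-of-series values) not handled here:

    ```python
    def parse_crn_decade_line(line):
        """Parse one decade line of a Tucson-format ITRDB chronology.

        Assumed layout (inferred from the example in the comment above):
        a 6-character site ID, a 4-digit decade start year, then repeating
        7-character fields, each a 4-digit chronology value followed by a
        3-digit sample count.
        """
        site_id = line[:6]
        start_year = int(line[6:10])
        records = []
        rest = line[10:]
        for i in range(0, len(rest), 7):
            field = rest[i:i + 7]
            if len(field) < 7:
                break  # ignore any trailing partial field
            value = int(field[:4])   # chronology value for this year
            count = int(field[4:7])  # number of samples in this year
            records.append((start_year + i // 7, value, count))
        return site_id, records

    site, records = parse_crn_decade_line("SFP51914001112 151105 15")
    # site -> "SFP519"; records -> [(1400, 1112, 15), (1401, 1105, 15)]
    ```

    Run on the example line, this recovers exactly the reading given above: 15 samples behind the 1400 value, not 15 trees total in the network.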

  16. DC,

    This kind of answer is, in my most humble opinion, very important, at the very least to show the level of misunderstanding. It also addresses the issues, which is more pedagogical than solely raising them.

    Many thanks!

  17. Rattus,

    Thank you for your answer. Readers should now get an idea why, somehow, Wegman is usually mentioned along the lines:

    > Under oath, North accepted Wegman’s conclusions.

    (Paraphrasing, of course.)

    Considering all that has been said so far, one can easily get the impression that Wegman et al. tried to show how easy dendro was to master. That experiment was definitely not a success. And if mastering it is not possible, the idea that dendro is 90% statistics is moot.

    That leads to this argument:

    > One of the main barriers to increased participation by statisticians in climate science is the time necessary to come up to speed in the new field and to adequately understand the statistical issues faced by the scientists he is working with.

    This argument is very important. Applying statistics entails some interpretation of the underlying field. This interpretation is never trivial.

    A past Florida election reminds me of the difficulties of trying to resolve problems with pure statistics.

  18. In related news, Steve McIntyre takes academic misconduct very, very seriously indeed:

    Wahl and Ammann issued a press release saying that our results were “unfounded”; they admitted that our results on this point were right only after an academic misconduct complaint against Ammann.

    … specifically, plagiarism (two comments later):

    Wahl and Ammann mostly plagiarizes Mann’s 2004 submission to Nature and early realclimate posts.

    I guess these should be added to McIntyre’s list.

  19. willard says:
    Thank you for your answer. Readers should now get an idea why, somehow, Wegman is usually mentioned along the lines:

    > Under oath, North accepted Wegman’s conclusions.

    It’s true that North agreed with Wegman on the issue of short-centered PCA and the inappropriateness of firm inferences concerning a specific year (1998) or decade.

    But on other matters, such as the general characterization of Mann’s work and paleoclimatology in general, and the supposedly problematic peer review in the field, North disagreed. Rather vehemently, in the latter case.

  20. DC,

    You’re quite right.

    My point was that North’s agreement on the very specific issue you mentioned might very well be the most important thing that contrarians can brag about in the whole hearing:


    (I suggest everyone read that. In my opinion, the event reads like a script; anyone interested in theater will rejoice in reading it.)

    As I see it, one could interpret your work regarding Wegman as showing two things. First, that this North acceptance might very well be the **only** thing contrarians can brag about in the whole hearing. Second, that this North acceptance is not much, unless we include slogans about SCIENCE.

  21. Jonathan Bagley

    If one is attempting to define long range dependent processes, it is inevitable that the same phrases and examples of applications will be used; and approximately in the same order. It’s not a story.

    You risk looking foolish if you persist in attacking Wegman. Just admit this PCA is a mistake.

    • If you google the exact phrases together, you will find the particular combination rare indeed.

      I have no doubt that the short-centered PCA is legitimately criticized. But even there, there is no discussion of retention criteria. And the other 95% of the report is dubious as well.

  22. Here’s North — the video is his presentation given to Dessler’s class — explaining the Hockey Stick and Wegman hearing reports, first hand. Don’t miss it.

    [video src="http://geotest.tamu.edu/userfiles/216/NorthH264.mp4" /]

  23. Hank: by happy chance I have been looking at that,
    and for others, I just happen to have timestamps for some interesting topics.

    It is also worth knowing that Grace Wahba is a heavily-cited statistician member of the National Academy of Sciences who has long interacted with climate scientists, i.e., one of the two most obviously-qualified “reviewers.”

    10:30 Barton-Whitfield letters
    11:30 Rep. Boehlert rebukes Barton, says get NRC
    15:00 Barton gets own committee
    16:30 Hockey stick first to try to do error bars, widely seen
    18:30 Best guess in 1990 IPCC report, chart shown often lately
    19:45 Wegman Report
    “We got to see it about 3 days before the Congressional hearing”
    20:00 Wegman, Scott, Said
    20:30 NRC Report, strong panel,
    21:40 NRC 12 Anonymous referees, 70 pages, 2 monitors to make sure every criticism answered
    22:15 Regarding WR referees, North paraphrases email from one:
    “What about their referee job? They claimed it was refereed but in fact they just sent it out to some friends right at the last minute. And in fact, one of them that it was sent to, Grace Wahba, who some of you may know at Wisconsin, she sent me an email and she says… Hey they used my name and they said I was a referee. He sent it to me about 3 days beforehand and I sent him a bunch of criticisms which they didn’t take into account.”
    22:45 WR network analysis, comments about coauthorship, statisticians
    25:45 Discussion of temperature reconstructions
    29:30 MM got PC right, but did it make any difference?
    40:30 Ice core records, low-latitudes, like hockey stick
    42:00 Glacier lengths, hockey stick
    42:50 Boreholes, corals
    45:50 Forcings, CO2, CH4
    47:00 Sunspots
    49:00 Volcanoes
    50:00 Other reconstructions, new studies
    51:40 Spaghetti curve, look at envelope
    54:15 Put spaghetti with hockey stick error bars
    56:50 30-year averages warmest, 400 years likely, 1000 years plausible
    58:15 end of talk
    58:50 MWP likely varied globally
    01:01:50 LIA seems more global
    01:03:00 Does it have anything to do with AGW? No key = physics

  24. Once again, Hank providing good links!

    Do we know if Grace Wahba still has the criticisms that were ignored by Wegman?

  25. DC,

    Excellent expose of Wegman et al., and of McIntyre on the most recent thread.

    I am still dumbfounded that we all seem to think that GMU is going to move on this on its own or that a journalist is serendipitously going to stumble on your series and actually run with it.

    One complaint against Wegman and Said may have been filed according to someone posting here, but what is needed is a formal complaint from one or more persons.

    I really do not understand the reluctance of Rabett and others to follow through on this (it is my understanding that those of us in Canada probably cannot lodge a complaint, no?), and to make efforts to ensure that this story gets traction in the media. What am I missing?

    PS: I know of a journalist in Canada who may consider running an expose on the antics/politicking of McIntyre and CA. Not a national paper, but maybe a start?

  26. Rattus, July 31:

    1) Happily, your list of issues matched mine quite well regarding Wegman’s NCAR presentation. If you haven’t read the abstract of his later talk to GMU statisticians, it’s worth seeing.

    2) MapleLeaf:
    “no worries”? Sounds Australian to me.
    We hear that all the time from the staff at Big White.


  27. Hi John,

    Nope, not an Aussie. Yes, a lot of Aussies work the lifts in the Rockies and BC coastal range. Guess it is one way of escaping the blistering Australian heat.

    Looking forward to your expose John! Will you be in our part of the world this winter to ski?

  28. It looks to me that Wegman did not go far enough in slamming Mann, Bradley and Hughes. But not to worry because McShane and Wyner take up the issue in the Annals of Applied Statistics and put the nail in the Hockey Stick coffin once and for all. I guess that you boys and girls will have to try to make up some more stuff to try and hide the obvious.


  29. Yep, we’re busy making it up right now, with the 1-in-1000+ warming event in Russia apparently convincing the government to believe a little bit in science.

    Which you, obviously, do not.

    • The difficulty though is identifying what is really science and what is merely advocacy masquerading as science. And having the job title “scientist” is no help here whatsoever. A general guideline for the layman is to ask
      (a) who is sponsoring the science?
      (b) whose interests do the conclusions favour?
      And be particularly careful where the two turn out to be the same, no matter who it is.

    • Rene,
      Here’s one clue. Science is in the corpus of peer reviewed scientific literature. Not in reports to congress commissioned with clear political motivation.

  30. Statistically, what are the chances of some denialist not prematurely crowing that the latest canard de jour is yet another final nail in the coffin?

  31. It does very much seem that because Bradley cannot counter the Wegman criticism of the PC methods in the MBH papers, and is unwilling to face up to this, he now seeks some tangential method of somehow sidelining the Wegman critique, especially now with the Republicans likely to make gains.

    More generally, considering the academic ordure Bradley hoped would be heaped on Wegman, it’s a clear warning shot over the bows of career climatologists, should they ever get out of step with the consensus/party line.

    [DC: The statistical criticisms of MBH98/99 raised in the Wegman report were already dealt with and assessed in the peer-reviewed scientific literature months before the non-peer-reviewed Wegman report was released, as seen in the NAS/NRC report and the in-press version of Wahl and Ammann (2007). In this area, Wegman et al raised no new substantive criticisms, and their discussion was biased and excluded many of the actual issues and much of the relevant scientific literature.

    However, Wegman’s unscientific critique of paleoclimatology and his assertion of supposed failure of peer review in the field are simply untenable. The shoddy scholarship and evident bias are clear indicators of the deep flaws in the Wegman report.

    This should be a warning all right. It is this: Scientists are drawing a line – politically motivated attacks on science and scientists are unacceptable. ]

    • Gavin's Pussycat

      Rene, you’re projecting. Just because you prefer political games over science, doesn’t mean everybody does. Here’s news for you: some folks actually care deeply about getting the science right, and getting the right science out. Ray Bradley is one of them.

    • The plagiarism issue is nothing *but* Bradley playing political games.

    • Gavin's Pussycat

      Rene, the red pill.

  32. What scientists say is not necessarily science. Scientists are just as liable as anyone else to be politically motivated; expertise does not equal integrity or honesty, as recent exposes indicate. The attacks you refer to are an attack on what presents itself as science, but appears to some to be politically skewed.

    • No, the attacks are on the science and its credibility and integrity. If there are flaws or gaps in published science, they should be (and are) addressed in the scientific literature.

      On the available evidence, the lack of integrity and honesty is a characteristic of the attackers, not the scientists. And with that, I must regretfully conclude discussion of your repeated, unsubstantiated assertions to the contrary. It is becoming quite tiresome. Thanks!

  33. Rene is just McIntyre’s sock puppet.
    Just Walk Away Rene.

    If you don’t understand, you are tooooo young.

  34. Yes, as you say: much of mainstream climate science, its process and literature, is under attack for alleged lack of integrity, political bias etc. This may well seem tiresome, but ignoring or suppressing comment on it will not make it go away; if anything it will be taken as a tacit admission of guilt.
    And yes, gaps and errors *should* be addressed in journals. That though is not the end of the matter, since journal editing itself may lack integrity and be biased. It is no argument to just assume knowledge brings with it integrity; science is as corruptible as anything else.

    [DC: No one is “ignoring or suppressing comment on it”, even your repetitive and unsubstantiated remarks. In fact, the vacuity of your position and its lack of supporting evidence have been clearly demonstrated over and over. Even within the journals, the evidence overwhelmingly shows that it is the “skeptics” who have engaged in questionable practices; for example, as in the case of Chris de Freitas at Climate Research. If you disagree, it is up to you to adduce actual evidence to the contrary and comment on the appropriate thread.

    In particular, Steve McIntyre has made many accusations of misconduct and plagiarism against climate scientists, and yet not one has been borne out. The accusations against Raymond Bradley, along with Wahl and Ammann, are only the latest raised on extremely flimsy or non-existent evidence. Meanwhile McIntyre and Watts pretend that the only evidence of misconduct against Wegman and Said rests on one or two copied paragraphs, rather than the 35 pages of the Wegman report identified so far. Not to mention the clear evidence with regard to the federally funded Said et al 2008.

    Even worse, the Republican claim that the Wegman report was “independent” and “peer-reviewed” has been demonstrated to be utterly false. The material reviewed by the Wegman panel was supplied by Barton committee staffer Peter Spencer, and the Wegman report studiously avoided substantive discussion of the only three peer-reviewed commentaries on McIntyre and McKitrick extant in the scientific literature.

    The silence on the real issues is deafening. That is surely a “tacit admission” – on the part of McIntyre, Watts and their followers.

    I’ll close with a reminder to you to read the comment policy and follow it. Continued repetition of the same old unsubstantiated points, or continued off-topic comments, will be dealt with appropriately. Very cheerfully. Thanks!]

  35. ‘Rene’ is well crafted and heavily scripted ‘astroturf’.

    You guys need to get more sophisticated quantum neurolyzers!

    • Gavin's Pussycat

      Thomas Lee, heavily scripted, sure. But well-crafted? I would do a better job of it if I could live with not looking in the mirror.

  36. [DC: I’ve warned you repeatedly. If you want to comment here, discuss the specific issues on the appropriate thread. Thanks!]

  37. Pingback: George Mason University’s endless inquiry | Deep Climate