Replication and due diligence, Wegman style

By Deep Climate

Today I continue my examination of the key analysis section of the Wegman report on the Mann et al “hockey stick” temperature reconstruction, which uncritically rehashed Steve McIntyre and Ross McKitrick’s purported demonstration of the extreme biasing effect of Mann et al’s “short-centered” principal component analysis.

First, I’ll fill in some much needed context as an antidote to McIntyre and McKitrick’s misleading focus on Mann et al’s use of principal components analysis (PCA) in data preprocessing of tree-ring proxy networks. Their problematic analysis was compounded by Wegman et al’s refusal to even consider all subsequent peer reviewed commentary – commentary that clearly demonstrated that correction of Mann et al’s “short-centered” PCA had minimal impact on the overall reconstruction.

Next, I’ll look at Wegman et al’s “reproduction” of McIntyre and McKitrick’s simulation of Mann et al’s PCA methodology, published in the pair’s 2005 Geophysical Research Letters article, Hockey sticks, principal components, and spurious significance. It turns out that the sample leading principal components (PC1s) shown in two key Wegman et al figures were in fact rendered directly from McIntyre and McKitrick’s original archive of simulated “hockey stick” PC1s. Even worse is the astonishing fact that this special collection of “hockey sticks” is not even a random sample of the 10,000 pseudo-proxy PC1s originally produced in the GRL study. Rather, it expressly contains the very top 100 – one percent – having the most pronounced upward blade. Thus, McIntyre and McKitrick’s original Fig 1, mechanically reproduced by Wegman et al, shows a carefully selected “sample” from the top 1% of simulated “hockey sticks”. And Wegman’s Fig 4-4, which falsely claimed to show “hockey sticks” mined from low-order, low-autocorrelation “red noise”, contains another 12 from that same 1%!

Finally, I’ll return to the central claim of Wegman et al – that McIntyre and McKitrick had shown that Michael Mann’s “short-centered” principal component analysis would mine “hockey sticks”, even from low-order, low-correlation “red noise” proxies. But both the source code and the hard-wired “hockey stick” figures clearly confirm what physicist David Ritson pointed out more than four years ago, namely that McIntyre and McKitrick’s “compelling” result was in fact based on a highly questionable procedure that generated null proxies with very high auto-correlation and persistence. All these facts are clear from even a cursory examination of McIntyre’s source code, demonstrating once and for all the incompetence and lack of due diligence exhibited by the Wegman report authors.

Before diving into the details of Wegman et al’s misbegotten analysis, it’s worth reminding readers of the context of principal component analysis (PCA) in the Mann et al methodology of multi-proxy temperature reconstruction.

First of all, it should be noted that PCA was used in data pre-processing to reduce large sub-networks of tree-ring proxies to a manageable number of representative principal components, which were then combined with other proxies in the final reconstruction. The retained PCs together reflect the climatological information in the proxy sub-network; the actual number required depends on the size of the sub-network (which itself changes depending on the time period or “step” under consideration). But it also depends on the details of the PCA procedure itself.

In their 2005 GRL article, McIntyre and McKitrick (hereafter M&M) purported to have discovered a major flaw in the Mann et al methodology. Instead of standardizing each proxy series on the mean of the whole series before transforming into a set of PCs, Mann et al standardized on the mean during the instrumental calibration period. McIntyre and McKitrick claimed that this “short-centered” method, when applied to so-called “persistent red noise”, nearly always produces a hockey stick shaped first principal component (PC1). Furthermore, M&M focused on the PC1 produced for the North American (NOAMER) tree-ring proxy sub-network in the 1400-1450 “step” of the MBH reconstruction. According to M&M, this PC1 was “essential” to the overall reconstruction; thus its “correction” demonstrated that the original “hockey stick” reconstruction was “spurious”.

While M&M clearly identified a mistake (or, at best, a poor methodological choice), critiques of M&M pointed out two major problems with their analysis. First, it turned out that M&M had left out a crucial step in their emulation of the MBH PCA methodology, namely the restandardization of proxy series prior to transformation (an issue first raised in Peter Huybers’s published comment). Second, and even more importantly, M&M had neglected to assess the overall impact of any bias engendered by Mann et al’s PCA. In particular, they failed to take into account any criterion for the number of PCs to be retained in the dimensionality reduction step, as well as the impact of combining all of the retained PCs with the other proxies.

These issues (and more) were treated comprehensively by Wahl and Ammann’s Robustness of the Mann, Bradley, Hughes reconstruction of Northern Hemisphere surface temperatures (Climatic Change, 2007), a paper that was first in press and available online in early 2006. They found that variants of PCA applied to the NOAMER tree-ring network had minimal impact on the final reconstruction, as long as the common climatological information in the proxy set was retained. In effect, “short-centered” PCA may have promoted “hockey stick” patterns in the proxy data to the leading PCs, but those patterns were still present under centered PCA if all the PCs necessary to account for sufficient explained variance were retained. This paper, along with several others, was cited by the National Research Council’s comprehensive report on paleoclimatology as demonstrating that the “MBH methodology does not appear to unduly influence reconstructions of hemispheric mean temperature” (Surface Temperature Reconstructions for the Last 2,000 Years, p. 113).

But, as we have seen before, Wegman et al deliberately excluded substantive consideration of the peer-reviewed literature in their analysis of M&M, even misrepresenting the work of Wahl and Ammann as a flimsy excuse for excluding it:

MM05a was critiqued by Wahl and Ammann (2006) and the Wahl et al. (2006) based on the lack of statistical skill of their paleoclimate temperature reconstruction. Thus these critiques of the MM05a and MM05b work are not to the point.

Instead, Wegman et al appeared to take at face value all of M&M’s assertions, including the conflation of the NOAMER PC1 with the overall reconstruction, as is very clear from the reproduction of M&M Fig. 1 in Wegman et al 4.1.

Figure 4.1: Reproduced version of Figure 1 in McIntyre and McKitrick (2005b). Top panel is PC1 simulated using MBH 98 methodology from stationary trendless red noise. Bottom panel is the MBH98 Northern Hemisphere temperature index reconstruction.

Discussion: The similarity in shapes is obvious. As mentioned earlier, red noise exhibits a correlation structure, which, although it is a stationary process, will depart from the zero mean for minor sojourns. However, the top panel clearly exhibits the hockey stick behavior induced by the MBH98 methodology.

Now let’s dig into the implications of that figure and the subsequent “hockey stick” festival of Fig 4.4, in light of a review of McIntyre’s R script, archived as part of the supplementary information of M&M 2005 at the AGU website. Recall that M&M had produced 10,000 simulations of the NOAMER PC1 from null proxy sets generated from “persistent red noise” (mistakenly assumed to be conventional red noise by Wegman et al, as seen above). This type of Monte Carlo test is often used to benchmark temperature reconstructions, although McIntyre’s method of creating “random” null proxies was unusual and highly questionable, as we shall see.

The first thing I noted in my original discussion was that, although the PC1 shown was supposedly a “sample”, it was identical in both the M&M and Wegman et al figures. How did that happen? A quick peek at the code gives the answer – this PC1 is read from an archived set of PC1s previously stored, and is #71 of the set. The relevant lines read:

hockeysticks<-read.table(file.path(url.source,"hockeysticks.txt"),
  skip=1)
...
plot(hockeysticks[,71],axes=FALSE,type="l",ylab="",
   font.lab=2)

Sure enough, the M&M ReadMe confirms this:

2004GL021750-hockeysticks.txt.  Collation of 100 out of 10000 simulated PC1s, dimension 581 x 100

The eagle-eyed reader will note that the code doesn’t have the prepended archive name – one error among many that Wegman et al would have had to correct with McIntyre’s assistance (the same mistakes that Peter Huybers had fixed, apparently on his own, some months before in his version of the script). Indeed, the script saves a set of 100 simulated PC1s on every run, and so this archived set is a particular group that was saved at some point.

However, the more interesting question is this: Exactly how was this sample of 100 hockey stick PC1s selected from the 10,000? That too is answered in the script code.

As the simulation progresses, statistics are gathered on each and every simulated PC1. One of those statistics is McIntyre’s so-called “hockey stick index” or HSI, which in effect measures the length of the blade of each PC1 “hockeystick”. An HSI of 1 means that the mean of the calibration period portion of the series is 1 standard deviation above the mean of the whole series.
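In code terms, the definition is straightforward. Here is a minimal Python sketch (the actual script is in R; the 581-point series length comes from the M&M archive, and the 79-point calibration window corresponds to the 1902-1980 MBH calibration period):

```python
import numpy as np

def hockey_stick_index(series, calibration_len=79):
    """Hockey stick index: distance of the calibration-period mean from the
    whole-series mean, in units of the whole-series standard deviation."""
    series = np.asarray(series, dtype=float)
    calib_mean = series[-calibration_len:].mean()
    return (calib_mean - series.mean()) / series.std()

# A toy "hockey stick": a flat 502-year shaft, then an upward calibration-era blade
toy = np.concatenate([np.zeros(502), np.ones(79)])
print(hockey_stick_index(toy))  # comfortably above 1
```

A series with a strongly downward blade would score symmetrically below -1, which is why the script’s tallies (above) count both tails.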

The HSI for each PC1 (themselves numbered from 1 to 10,000) is stored in an array called stat2, as seen in this code snippet that renders the overall results.

#HOCKEY STICK FIGURES CITED IN PAPER
#MANNOMATIC
#simulations are not hard-wired and results will differ slightly in each run
#the values shown here are for a run of 10,000 and results for run of 100 need to be multiplied

temp<-(stat2>1)|(stat2< -1); 	  sum(temp) #[1] 9934
temp<-(stat2>1.5)|(stat2< -1.5);  sum(temp) #[1] 7351
temp<-(stat2>1.75)|(stat2< -1.75);sum(temp) #[1] 2053
temp<-(stat2>2)|(stat2< -2); 	  sum(temp) #[1]  25

The comments show that more than 99% of the PC1s have a hockey stick index with an absolute value greater than 1, and a few even exceed the extreme value of 2.

Now, was a random sample of these PC1s saved? Or perhaps just the first 100 (which would also be reasonably random)? Not quite.

############################################
#SAVE A SELECTION OF HOCKEY STICK SERIES IN ASCII FORMAT
order.stat<-order(stat2,decreasing=TRUE)[1:100]
order.stat<-sort(order.stat)

hockeysticks<-NULL
for (nn in 1:NN) {
  load(file.path(temp.directory,paste("arfima.sim",nn,"tab",sep=".")))
   index<-order.stat[!is.na(match(order.stat,(1:1000)+(nn-1)*1000))]
   index<-index-(nn-1)*1000
   hockeysticks<-cbind(hockeysticks,Eigen0[[3]][,index])
} #nn-iteration

dimnames(hockeysticks)[[2]]<-paste("X",order.stat,sep="")
write.table(hockeysticks,file=file.path(url.source,
   "hockeysticks.txt"),sep="\t",quote=FALSE,row.names=FALSE)

The first line sorts the PC1 indices by descending HSI and keeps the top 100. That list is then re-sorted by PC1 index, so that each PC1 selected for the archive can be retrieved from the appropriate temporary file and saved in ASCII format.
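The selection logic can be paraphrased in a few lines of Python (not the original code; the variable names mirror the script’s stat2 and order.stat, with 0-based rather than R’s 1-based indexing):

```python
import numpy as np

rng = np.random.default_rng(0)
stat2 = rng.normal(size=10_000)        # stand-in for the 10,000 HSI values

# Equivalent of: order.stat <- order(stat2, decreasing=TRUE)[1:100]
order_stat = np.argsort(stat2)[::-1][:100]  # indices of the 100 highest HSIs
order_stat = np.sort(order_stat)            # re-sort by index for file retrieval

# Every archived series comes from the extreme upper tail - not a random draw
assert np.all(stat2[order_stat] >= np.percentile(stat2, 99))
```

The final assertion is the whole point: whatever the distribution of HSI values, this procedure archives only the top one percent.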

Recall M&M’s description of the “sample” PC1 in figure 1 (Wegman et al 4.1):

The simulations nearly always yielded PC1s with a hockey stick shape, some of which bore a quite remarkable similarity to the actual MBH98 temperature reconstruction – as shown by the example in Figure 1.

That’s “some” PC1, all right. It was carefully selected from the top 100 upward bending PC1s, a mere 1% of all the PC1s.

To confirm these findings, I downloaded the archived “sample” set of PC1s. First I plotted good old #71, which, as can readily be seen, is identical to the “sample” PC1 in McIntyre’s original Fig 1 (and Wegman et al 4.1).

And here are the corresponding identical panels from M&M Fig 1 and Wegman et al Fig 4.1:

I then calculated the “Hockey stick index” for each of the 100 and confirmed that they all had HSI greater than 1.9. Also, 12 of the 100 had an HSI above 2, which jibes with the totals given by McIntyre in the comments and the main article (presumably there were another 13 severely downward PC1s with HSI less than -2).

It turns out that #71 is not too shabby. Its HSI is 1.97, which is #23 on the HSI hit parade, out of 10,000.

I then turned to Fig 4.4, which presented 12 more simulation PC1 hockey sticks. Although this figure was not part of the original M&M article, there is a fourth figure generated in the script, featuring a 2×6 display, just like the Wegman figure. A quick perusal of the code shows that these too were read from McIntyre’s special 1% collection, although a different selection of 12 PC1s would be output each time.

hockeysticks<-read.table(file.path(url.source,
    "2004GL021750-hockeysticks.txt"),sep="\t",skip=1)
postscript(file.path(url.source,"hockeysticks.eps"),
    width = 8, height = 11,
    horizontal = FALSE, onefile = FALSE, paper = "special",
    family = "Helvetica",bg="white",pointsize=8)

nf <- layout(array(1:12,dim=c(6,2)),heights=c(1.1,rep(1,4),1.1))
...
index<-sample(100,12)
plot(hockeysticks[,index[1]],axes=FALSE,type="l",ylab="",
    font.lab=2,ylim=c(-.1,.03))

To confirm this, I set up a dynamic chart in Excel, and scrolled through to find the first PC1 displayed (in the upper left hand corner). Here is the dynamic chart in action, with #35 selected, followed by a close up of the matching top left PC1 in Wegman et al Fig 4.4.

Two more scrolls through and I had identified all 12 (independently corroborated by another correspondent). So here is Fig 4-4, with each “hockey stick” identified by its position within McIntyre’s top 1%, as well as its original identifier (an index between 1 and 10,000).

And as verification, I reran the final part of the M&M script, but with a small change to coerce display of the same 12 PC1 “hockey sticks” as Wegman et al had shown.

### index<-sample(100,12)
index = c(35,14,46,100,91,81,49,4,72,54,33)

Here is Wegman et al Fig 4-4 side by side with the resulting reproduction of the 12 hockey stick figure from the M&M script. They are clearly identical (although the Wegman et al version is not as dark for some reason).

Wegman et al 4.4 (L) and  12 identified  PC1s from the top 1% archive (R).

Naturally, the true provenance of Fig 4-4 sheds a harsh light on Wegman et al’s wildly inaccurate caption:

Figure 4.4: One of the most compelling illustrations that McIntyre and McKitrick have produced is created by feeding red noise [AR(1) with parameter = 0.2] into the MBH algorithm. The AR(1) process is a stationary process meaning that it should not exhibit any long-term trend. The MBH98 algorithm found ‘hockey stick’ trend in each of the independent replications.

As was pointed out long ago by David Ritson (and discussed here recently), this greatly overstates what McIntyre and McKitrick actually demonstrated. To fully understand this point, it is necessary to quickly review the statistical models underpinning paleoclimatological reconstructions. Typically, proxies are considered to contain a climate signal of interest (in this case temperature), combined with noise representing the effects of non-climatic influences. This non-climatic noise is usually modeled with a low-order autoregressive (AR) model, meaning that the noise in a given year is correlated with that in immediately preceding years. Specifically, an AR model of order 1, commonly called “red noise”, specifies that values at time t in the time series be correlated with the immediately preceding values at time t-1. The amount of this auto-correlation is given by the lag-one coefficient parameter, phi.
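As an illustration (a Python sketch of the standard AR(1) recursion, not code from any of the studies discussed), red noise can be generated directly from its definition, and the sample lag-one autocorrelation of a long realization recovers the phi parameter:

```python
import numpy as np

def ar1_noise(n, phi, rng):
    """Generate AR(1) 'red noise': x[t] = phi * x[t-1] + eps[t]."""
    eps = rng.normal(size=n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

rng = np.random.default_rng(42)
x = ar1_noise(50_000, phi=0.9, rng=rng)

# Sample lag-one autocorrelation - should land close to phi = 0.9
x0 = x - x.mean()
r1 = (x0[:-1] * x0[1:]).sum() / (x0 ** 2).sum()
print(round(r1, 2))
```

The same recursion with phi = 0.2 produces noise whose memory of the previous year is weak, which is exactly the distinction at issue below.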

One of the common uses of AR noise models is to benchmark paleoclimatological reconstructions. In this procedure, random “null” proxy sets are generated and their performance in “reconstructing” temperature for part of the instrumental period is used to establish a threshold for the statistical skill of reconstruction from the real proxies. As estimated through various methods, the lag-one correlation coefficient used to generate the random null proxy sets is usually set between 0.2 and 0.4. (Mann et al 2008 used 0.24 and 0.4).

As we have seen above, M&M employed a similar concept to generate simulated PC1s from random noise proxies. (Confusingly, M&M called their null proxies “pseudo-proxies”, a term previously employed for test proxies generated by adding noise to simulated past temperatures.) However, M&M’s null proxies were not generated as AR1 noise as claimed by Wegman et al, but rather by using the full autocorrelation structure of the real proxies (in this case, the 70 series of the 1400 NOAMER tree-ring proxy sub-network).

The description of the method was somewhat obscure (and completely misunderstood by Wegman et al). But we can piece it together by following the code.

if (method2=="arfima")
{Data<-array(rep(NA,N*ncol(tree)), dim=c(N,ncol(tree)));
	for (k in 1:ncol(tree)){
	    Data[,k]<-acf(tree[,k][!is.na(tree[,k])],N)[[1]][1:N]
	}#k
} #arfima
...
if (method2=="arfima")
{N<-nrow(tree);
	b<-array (rep(NA,N*n), dim=c(N,n) )
	for (k in 1:n) {
		b[,k]<-hosking.sim(N,Data[,k])
	}#k
}#arfima

The ARFIMA notation refers to a more complicated three-part statistical model (the three parts being AutoRegressive, Fractionally Integrated and Moving Average). It is a generalization of the ARIMA (autoregressive integrated moving average) model, itself an extension of the familiar ARMA (autoregressive moving average) model. ARFIMA permits the modeling, with just a few parameters, of “long memory” time series exhibiting high “persistence”. The generalized ARFIMA model was presented by J. R. M. Hosking in his 1981 paper, Fractional Differencing (Biometrika, 1981). A subsequent paper, Modeling persistence in hydrological time series using fractional differencing (Water Resources Research, 1984), outlined a method to derive a particular ARFIMA model from the full autocorrelation function of a time series, and to generate a corresponding random synthetic series based on the ARFIMA parameters derived from that autocorrelation structure.
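To get a feel for what “long memory” means in practice, the pure fractionally integrated case, ARFIMA(0,d,0), can be simulated from its MA(∞) expansion, whose weights follow the recursion psi_k = psi_(k-1)·(k-1+d)/k from Hosking’s 1981 paper. The following Python sketch (truncated at a finite lag, and in no way a substitute for the hosking.sim algorithm M&M actually used) shows how much more lag-one autocorrelation such noise carries than white noise:

```python
import numpy as np

def arfima_0d0(n, d, rng, trunc=1000):
    """Fractionally integrated noise via a truncated MA(inf) expansion.
    Weights: psi_0 = 1, psi_k = psi_(k-1) * (k - 1 + d) / k."""
    psi = np.empty(trunc)
    psi[0] = 1.0
    for k in range(1, trunc):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(size=n + trunc)
    # x[t] = sum over k of psi[k] * eps[t - k]
    return np.convolve(eps, psi, mode="full")[trunc : trunc + n]

def lag1_acf(x):
    x0 = x - x.mean()
    return (x0[:-1] * x0[1:]).sum() / (x0 ** 2).sum()

rng = np.random.default_rng(1)
long_memory = arfima_0d0(20_000, d=0.45, rng=rng)  # d near the 0.5 boundary
white = rng.normal(size=20_000)
print(lag1_acf(long_memory), lag1_acf(white))
```

For d = 0.45 the theoretical lag-one autocorrelation is d/(1-d), roughly 0.8 – far above anything produced by the low-order red noise Wegman et al described.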

So first, each of the 70 tree-ring proxies used by Mann et al was analyzed for its complete individual auto-correlation structure, using the acf() function. Then within each of the 10,000 simulation runs, the Hosking algorithm (as implemented in the hosking.sim R function) was used to generate a set of 70 random “null” proxies, each one having the same auto-correlation structure, represented by its particular ARFIMA parameters, as its corresponding real proxy.

Naturally, this raises a number of issues that were not addressed by Wegman et al, since the authors completely failed to understand the actual methodology used in the first place. (This abject failure can be explained in part, if not excused, by M&M’s misleading description of their random proxies as consisting of “trendless” or “persistent” red noise, a nomenclature found nowhere else.) Foremost among these issues is McIntyre and McKitrick’s implicit assumption that the “noise” component of the proxies, absent any climatic signal, can be assumed to have the same auto-correlation characteristics as the proxy series themselves. But as von Storch et al (2009) observed:

Note that the fact that such [long-term persistence] processes may have been identified in climatic records does not imply that they may also be able to represent non-climatic noise in proxies.

Even if Wegman et al completely missed the real issues, other critics did not fail to point out problems with the M&M null proxies, as I have discussed before. Ammann and Wahl (Climatic Change, 2007) observed that by using the “full autoregressive structure” of the real proxies, M&M “train their stochastic engine with significant (if not dominant) low frequency climate signal”. And, as we have already seen, David Ritson was aware of the true M&M methodology back in November 2005, and pointed out M&M’s “improper” methodology to Wegman et al within weeks of the Wegman report’s release.

Indeed, the real reasons Wegman et al never released “their” code nor associated information are now perfectly clear. Doing so would have amounted to an admission that the supposed “reproduction” of the M&M results was nothing more than a mechanical rerun of the original script, accompanied by a colossally mistaken interpretation of M&M’s methodology and findings.

In any event, under attack by both Ritson and Ammann and Wahl, Steve McIntyre claimed several times that both the NRC report and Wegman et al had demonstrated an extreme biasing effect using AR1 noise instead of the dubious ARFIMA null proxies. As late as September 2008, McIntyre proclaimed:

The hosking.sim algorithm uses the entire ACF and people have worried that this method may have incorporated a “signal” into the simulations. It’s not something that bothered Wegman or NAS, since the effect also holds for AR1, but Wahl and Ammann throw it up as a spitball.

However, this claim is also highly misleading. First, as is now clear, McIntyre’s reliance on Wegman et al is speciously circular; Wegman et al mistakenly claimed that it was McIntyre and McKitrick who had demonstrated the “hockey stick” effect with AR1 noise, and certainly provided no evidence of their own.

So that leaves the NRC. It’s true that the NRC did provide a demonstration of the bias effect using AR1 noise instead of ARFIMA. But it was necessary to choose a very high lag-one coefficient parameter (0.9) to show the extreme theoretical bias of “short-centered” PCA. Indeed, that high parameter was chosen by the NRC expressly because it represented noise “similar” to McIntyre’s more complex methodology.

To understand what a huge difference the choice of AR1 parameter can make, recall David Ritson’s formula for estimation of persistence (or “decorrelation time”) in AR1 noise, which in turn is based on the exponential decay in correlation of successive terms in the time series. The “decorrelation time” is given by (1 + phi)/(1 – phi).

Using this formula, one can calculate a persistence of 19 years for AR1(.9) noise, as opposed to a mere 1.5 years for the AR1(.2) noise claimed by Wegman et al to produce extreme “hockey stick” PC1s. And, as one might expect, rerunning the NRC code with the lower AR1 parameter of 0.2 yields dramatically different results.
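Ritson’s formula is trivial to check (in Python here, purely for illustration):

```python
def decorrelation_time(phi):
    """Ritson's AR(1) persistence estimate: tau = (1 + phi) / (1 - phi) years."""
    return (1 + phi) / (1 - phi)

print(round(decorrelation_time(0.9), 1))  # 19.0
print(round(decorrelation_time(0.2), 1))  # 1.5
```

A factor of nearly thirteen in persistence separates the two noise models being conflated.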

The first chart (at left) is a rerun of the NRC code (available here); the second chart (at right) is the same code but with the AR1 parameter set to 0.2, instead of 0.9, by simply adjusting one line of code:

phi <- 0.2;

5 PC1s generated from AR1(.9) (left) and AR1 (.2) (right) red noise null proxies.

So Wegman et al’s “compelling” demonstration is shown to be completely false; the biasing effect of “short-centered” PCA is much less evident when applied to AR1(.2) noise, even when viewing the simulated PC1s in isolation. To show the extreme effect claimed by McIntyre, one must use an unrealistically high AR1 parameter. This is yet one more reason that the NRC’s ultimate finding on the matter, namely that “short-centered” PCA did not “unduly influence” the resulting Mann et al reconstruction, is entirely unsurprising.

In summary, then, I have shown in Wegman et al a misleading focus on one particular step of the Mann et al algorithm, accompanied by a refusal to substantively consider the peer-reviewed scientific critiques of M&M.

And to top it all, Wegman et al flatly stated that the biasing Mann et al algorithm would produce “hockey stick” reconstructions from low-order, low-autocorrelation red noise, while displaying a set of curves from the top 1% of “hockey sticks” produced from high-persistence random proxies. Those facts are clearly apparent from McIntyre and McKitrick’s source code, as modified and run by Wegman et al themselves.

Make no mistake – this strikes at the heart of Wegman et al’s findings, which held that the writing of Mann et al was “somewhat obscure”, while McIntyre and McKitrick’s “criticisms” were found to be “valid” and their arguments “compelling”. And yet the only “compelling illustration” offered by Wegman et al was the supposed consistent production of “hockey sticks” from low-correlation red noise by the Mann et al algorithm. But the M&M simulation turned out to be based on nothing of the kind, and, moreover, showed only the top 1% of simulated “hockey stick” PC1s.

So there you have it: Wegman et al’s endorsement of McIntyre and McKitrick’s “compelling” critique rests on abysmal scholarship, characterized by deliberate exclusion of relevant scientific literature, incompetent analysis and a complete lack of due diligence. It’s high time to admit the obvious: the Wegman report should be retracted.

[Update, November 22: According to USA Today’s Dan Vergano, a number of plagiarism experts have confirmed that the Wegman Report was “partly based on material copied from textbooks, Wikipedia and the writings of one of the scientists criticized in the report”. The allegations, first raised here in a series of posts starting in December 2009 (notably here and here), are now being formally addressed by statistician Edward Wegman’s employer, George Mason University. ]


226 responses to “Replication and due diligence, Wegman style”

  1. Thanks for putting together this comprehensive review.

  2. Nice detective work again, DC. I think it is always a good idea when posting about hockey sticks and Wegman, to remind readers that:

    The hockey stick-shape temperature plot that shows modern climate considerably warmer than past climate has been verified by many scientists using different methodologies (PCA, CPS, EIV, isotopic analysis, & direct T measurements). Consider the odds that various international scientists using quite different data and quite different data analysis techniques can all be wrong in the same way. What are the odds that a hockey stick is always the shape of the wrong answer?

    Any word on the investigation of Wegman?

  3. They are clearly identical (although the Wegman et al version is not as dark for some reason).

    Looks like it’s just higher resolution (increase the width and height arguments).

    That suggests Wegman spent more time understanding the plotting code than he did on the actual analysis.

  4. Thanks for this nice post! Would it be difficult to actually plot 10 random hockeystick PC’s out of the 10000 generated, say the first 10. (or maybe in the sorted list #50,150,250 etc.) I am curious how much “hockey-stick” shape is in there, because I don’t have a clue how to interpret the HSI exactly.

  5. Gavin's Pussycat

    It might be an idea to add an AR1 (.4) plot too…

  6. Sterling work DC.

    Your deep digging continues to demonstrate not only just how deep a hole Wegman et al dug, in their turn, for themselves in the production of their ‘report’, it nicely summarises just how spurious the whole fuss about M&M was from the start.

    As so many commenters are observing, the chickens are now coming home to roost for those of the Denialati who crowed so loudly and for so long about ‘poor science’. There is surely growing discomfort in more than one contrarian coop.

  7. Thanks for uncovering the “The Wegman Illusion”.

    Great work, as always, DC.

  8. From the main essay:

    As the simulation progresses, statistics are gathered on each and every simulated PC1. One of those statistics is McIntyre’s so-called “hockey stick index” or HSI, which in effect measures the length of the blade of each PC1 “hockeystick”. An HSI of 1 means that the mean of the calibration period portion of the series is 1 standard deviation above the mean of the whole series.

    The HSI for each PC1 (themselves numbered from 1 to 10,000) is stored in an array called stat2, as seen in this code snippet that renders the overall results….

    Now, was a random sample of these PC1s saved? Or perhaps just the first 100 (which would also be reasonably random)? Not quite.

    [computer code]

    The first line sorts the set of PC1s by descending HSI and then copies the first 100 to another array. That array is then sorted by PC1 index so that each PC1 selected for the archive can be retrieved from the appropriate temporary file and saved in ASCII format.

    This would appear to be a heck of a cherry pick! A hundred of them, actually.

    Wouldn’t cherry picking be covered in an introduction to statistics course 101? I never did I’m sorry to say, but I assume both McIntyre and Wegman would have taken something like that at some point.

  9. The comparison of PCs from AR(.9) and AR(.2) are a little confusing because of the change of scale between the two plots (eg, I could imagine the plots on the left looking not out-of-place if overlaid on the plots on the right).

    (Mind you, this was something that I also thought was odd about the Wegman report: they showed these hockey stick PCs with tiny absolute magnitudes from “red noise” and compared them to an order-of-magnitude larger hockey stick PCs from Mann et al., as in Figure 4.1)

    -M

  10. Boom. So Wegman not only plagiarized the social network analysis and background text, he also plagiarized the core statistical analysis- the crux of his supposed value add- albeit presumably with the full awareness and tacit (if not explicit) consent of its original authors. I’m reminded of the Cutter character in “Clear and Present Danger”, wherein he watches coolly from Langley as the bomb cam zooms in on the drug lord’s monster truck, throws his feet up on a table and takes a bite of what looks like a baby carrot. “Boom”. Lunar landscape where a work of high scholarship by impartial experts used to be.

    While much of this criticism is, as indicated, not new, the part that is (in particular the degree to which Wegman et al was nothing more than a credible edifice for M&M’s disinformation, much as Judy Miller served as credible edifice for Dick Cheney’s Iraqi WMD disinformation), together with the other woeful scholarship and rank plagiarism that yourself and John Mashey have recently exposed means there’s nothing left of the Wegman Report to discredit.

    Its every fiber has now been shredded, stomped, disemboweled and otherwise torn asunder. Its authors have been disgraced, and their thorough disrespect for science, truth and their own integrity laid bare for all to see. And of course, the reflection on McIntyre is clear, hence why this whole line of inquiry has driven him to conniption.

    Speaking of which, this raises a few questions about ramifications for M&M’s ‘scholarly work’, such as it is. Seems to me that the methodological choices therein border on the fraudulent. What could be their justification for using the full autocorrelation structure of the climate proxies in their validation null proxies? The quality of MBH’s methodology was to be inferred by how strongly the signal it detected in the proxy data differed from the information-less base case. A priori M&M would’ve known their choice would tilt the field of play against MBH.

    Since they had no evidence to support their unconventional methodological choice, how is it they are permitted to get away with that? Especially when you consider their novel and presumably undisclosed technique for generating a ‘random sample’ of PCs? Something’s rotten in Denmark if that can be allowed to stand. Isn’t getting a retraction of M&M from the journal that printed the nonsense a more worthy aim than the Wegman report?

  11. milanovic:

    … Would it be difficult to actually plot 10 random hockeystick PC’s out of the 10000 generated, say the first 10. (or maybe in the sorted list #50,150,250 etc.) I am curious how much “hockey-stick” shape is in there, because I don’t have a clue how to interpret the HSI exactly.

    Gavin’s Pussycat:

    It might be an idea to add an AR1 (.4) plot too…

    Even an upward-bending simulated PC1 with the median HSI (about 1.6, I reckon) would recognizably be a hockey stick. The point, though, is that the 1% solution is just one more choice that exaggerates the "hockey stick" effect.

    Now I clearly recognize that the “short-centered” PCA does promote “hockey stick” patterns to the first PC, and that this is not a proper methodology.

    But from the very start, the biasing effect was exaggerated by focusing only on PC1 in the data reduction step, instead of the MBH algorithm end-to-end.

    And even within this narrow focus, M&M's choices greatly exaggerate the "hockey stick" effect, e.g. by omitting the MBH renormalization step and using an inappropriate noise model. The 1% cherrypick is just one more exaggeration.

    A completely factored analysis would not only show representative hockey sticks, but also assess the impact on HSI of renormalization (as in the Huybers comment) *and* other noise model choices (perhaps AR1 with parameter 0.2, 0.4 or 0.9).

    Maybe I’ll get around to doing that (and improve my R chops in the process), but I must admit even my patience for this nonsense is wearing thin.
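    For anyone who does want to experiment, the HSI calculation and the short-centered PC1 extraction can be sketched in a few lines. This is a Python/numpy sketch rather than R, and the dimensions, calibration window, and AR(1) parameters are illustrative stand-ins, not MBH's exact setup:

    ```python
    import numpy as np

    def hsi(series, calib):
        # Hockey-stick index: calibration-period mean minus overall mean,
        # in units of the overall standard deviation.
        return (series[calib].mean() - series.mean()) / series.std()

    def short_centered_pc1(n_years, n_proxies, phi, calib, rng):
        # PC1 of a network of AR(1) pseudo-proxies, centered on the
        # calibration period only (the "short-centered" step at issue).
        e = rng.standard_normal((n_years, n_proxies))
        x = np.zeros_like(e)
        for t in range(1, n_years):       # x_t = phi * x_{t-1} + e_t
            x[t] = phi * x[t - 1] + e[t]
        x = x - x[calib].mean(axis=0)     # subtract calibration-period mean only
        return np.linalg.svd(x, full_matrices=False)[0][:, 0]

    rng = np.random.default_rng(0)
    n_years, n_proxies = 581, 70          # MBH-like shape, purely illustrative
    calib = slice(n_years - 79, n_years)  # last 79 "years" as calibration period

    medians = {}
    for phi in (0.2, 0.9):
        vals = [abs(hsi(short_centered_pc1(n_years, n_proxies, phi, calib, rng), calib))
                for _ in range(20)]
        medians[phi] = float(np.median(vals))
        print(f"phi = {phi}: median |HSI| of PC1 = {medians[phi]:.2f}")
    ```

    With low-persistence noise (phi = 0.2) the PC1s come out far less hockey-stick-like than with phi = 0.9, which is exactly the sensitivity to the noise model discussed above.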

  12. So, if I do 10,000 simulations with random data, and keep the top 100 with an upwards bend, what have I demonstrated?

  13. Looks like the Wegman and M&M scandal is beginning to look like the Bre-X scandal. Not surprising since both were orchestrated by mining executives. Cherry picking is just as dishonest as salting mining samples.

  14. DC: thanks for this. Your work illustrates further the sloppiness, superficiality and partiality of Wegman’s review and you highlight, once more, McIntyre’s tendency to exaggerate his case. By approving of Wegman’s shoddy work, McIntyre has also lost any credibility that he might once, long ago, have claimed as an auditor.

    One query, and forgive me if I have misunderstood something, but your Excel plot of HS35 does not appear to be the same as the one in the top left-hand corner of Fig 4-4. Also, you wrote:
    Here is the dynamic chart in action, with #35 selected, followed by a close up of the matching first PC1 in Wegman et al Fig 4.4.
    but the close-up graphic seems to be missing.

    [DC: Fixed – thanks. I guess the dynamic chart was a little too dynamic when I took the original snapshot. ]

  15. PolyisTCOandbanned

    1. Bottom line is that there is a biasing effect of "short centering". However, MM have consistently shot themselves in the ass by exaggerating the magnitude of this phenomenon for PR impact.

    To me, this shows a desire for winning games (and silly Internet arguments), rather than for true understanding of the phenomenon. If instead they had clearly shown the phenomenon and how it VARIES AS A FUNCTION of incoming noise, then they would have done something of interest. For example, if short centering were used in some other problem, even a non-climatic one where the red noise was more like a random walk, then what they showed would be significant!

    But this sort of curious and objective approach to a phenomenon, to look at it, test it and quantify it, was not their approach. Instead, their approach from the start was driven by "gotcha moments". Pity. 'Cause it would have been a fun little detective find (like Graham-Cumming with the uncertainty formula error, or even McI's Harry or Y2K finds (minor, interesting, fun corrections!)).

    And maybe it would even be mathematically interesting to model the phenomenon (in the case of short centering). But they weren't really interested in getting their arms around what was going on, WHETHER or NOT quantifying showed the impact to be minor. This is so bizarre to me, as the natural impulse I have in engineering, science, business, military, etc. is always to try to get a handle on something by quantifying! Heck, a lot of times I can quickly start to grasp something without knowing all the math or physics, just by quantifying how big it is, how it varies, what makes it vary. This is a way to think about new technical (or even business) problems quickly, and to allow you to draw inferences from other fields where you don't have a BS in them. Heck, even when doing undergrad physics or math or chemistry problems I would always look at the end result for size, just as a sanity check for correctness. I just think that is a normal impulse for how a curious analyst approaches things he does not yet fully understand, to get his arms around them! So when someone does not want to quantify, when he avoids quick comparisons, my hair stands on end. (And I don't buy the time excuse either. We are talking very high gain, obvious quantifications, and instead the guy has time for naked cartoons and other shitzandgiggles (for yearz and yearz!))

    In a sense, I sympathize with your saying that mapping the full methodology space, to show how different noise models and other methodology choices (standard deviation dividing) mesh with the short-centering biasing, would be a pain. I can even feel a little (but less!) sympathy for McI for not mapping the whole thing (although I'm also still surprised at the lack of curiosity to map a landscape). But that said, even if he did not want to map the whole space and just wanted to show that there was a code/logic error in MBH, do it WITHOUT exaggerating the flaw! Heck, even do it with a sort of deference to his "opponent", so as not to overexaggerate the flaw (if he lacked the time to really map the landscape and just wanted to pick one example). But instead he has clearly taken the one pretty clear "error" and consistently and in many ways (PC1 versus overall recon, Preisendorfer n versus two PCs, overmodeled red noise, top 100!, standard deviation dividing confounding, etblaf*ckingcetera) exaggerated the flaw. Sad, sad, sad!!!

    You know finding a logic/math error in a simulation is not unusual. I think every single time I have done a big (30+ sheets of Excel) financial model, there have been errors in it (and I’m not even the jock putting that stuff together, just get involved along the way). So, you should always find and fix errors. But making a big hubbub about something that does not change the answer is silly.

    And this is why you quantify. For instance, if you find that you double discounted in your IRR formula, that is a BIG DEAL. Could queer a deal, especially when presenting to non-numerate execs who fasten on IRR versus NPV. (Of course the difference between an anemic IRR and a juicy NPV should be the clue that you messed up the IRR… and that's why you think and compare and contrast!) On the other hand, being a year off in the revenue projections (but not investments) for an SBU that is 3% of the target portfolio is just NOT going to change the answer from any kind of "do we pull the trigger, make the investment" point of view. And, yeah, the errors might scare you. Might make you look for more, for other similar ones, have someone else (an independent analyst) check the code. But that's fine. And heck, it's normal. It's life. It's thinking. It's ANALYSIS! I really can't believe that this habit of mind needs to be explained or defended. Especially to out-of-field skeptics!

    2. I'm familiar with most of the long, tangled history here, but the "top 100" is a great new gotcha. I had never heard of that. Kudos, little guy! Can you please quantify how much difference it makes on the results to take the "top 100" out of the 10,000 versus using some "random 100"?

    3. Thank you for showing the difference between a real AR 0.2 and the claimed Wegman AR 0.9. I agree that the biasing artifact is highly sensitive to the amount of that. Actually, McI had shown some of that in his CA posts (not in the sort of objective, non-argumentative manner I would prefer, but with a clear bias toward arguing that the larger AR is "true"; at the same time, he did show differences). But STILL, he had shown somewhere how different amounts of random walk start to affect things. He had shown some of the "how red are your proxies" dependence (maybe he didn't take it all the way through an algorithm, but just showed the series themselves). That was one of the things that really set me off when I saw the Wegman "0.2". Not only did it not match NRC, it didn't even seem to match McI's blathering! It was just too low!

    4. I corresponded with Witcher and Hosking back in the day (and they have no, no, NO interest in this p***ing match). The person who really put together the “full ACF function” was Witcher (or whatever his name was, too lazy to look, but it is in that one CA thread). McI should cite him, not Hosking.

    There was a substantial amount of work to go from the Hosking paper to the Witcher code, and Hosking has definitely not checked it! Also, the 1981 paper is the more appropriate Hosking reference. Steve just gets way too cute with cites and I don't think he really understands the basic idea of how cites are supposed to work, but just picks stuff to be cute (there are other examples showing him not grasping the concept).

    P.s. cross-posted to AMAC in case you don’t like the earthy remarks (I actually self snipped a lot of them).

  16. Let me add: excellent work, DC. The work that must have gone into the above is amazing. I hope this does not add to your workload much more, but would it make sense to plot some random samples as milanovic suggested above, or are there scaling issues?

    I had a discussion with Steve McIntyre a couple of years ago on the scaling issue but I also asked about how eigenvalues fit into the topic, i.e. were the eigenvalues from the “noise” PCs smaller than the eigenvalues from the reconstruction. The answer was somewhat evasive.

    best,
    John

  17. Gavin's Pussycat

    > even my patience for this nonsense is wearing thin

    Human after all :-)

  18. Pingback: Anniversario » Ocasapiens - Blog - Repubblica.it


  19. I had a discussion with Steve McIntyre a couple of years ago on the scaling issue but I also asked about how eigenvalues fit into the topic, i.e. were the eigenvalues from the “noise” PCs smaller than the eigenvalues from the reconstruction. The answer was somewhat evasive.

    From M&M 2005:

    The loadings on the first eigenvalues were inflated by the MBH98 method. Without the transformation, the median fraction of explained variance of the PC1 was only 4.1% (99th percentile – 5.5%). Under the MBH98 transformation, the median fraction of explained variance from PC1 was 13% (99th percentile – 23%), often making the PC1 appear to be a "dominant" signal, even though the network is only noise.

    So the median explained-variance fraction for M&M's "centered" leading PCs is about 0.04; the median fraction for their "non-centered" (short-centered) leading PCs is about 0.13.

    Compare those with the centered/non-centered leading fractions from Mann's "hockey-stick" PCA (~0.2 and ~0.4, respectively).

    So even with M&M’s noise with autocorrelation length on the order of half of the time-series length, their eigenvalue spectrum was still much flatter than Mann’s. There’s no way that any competent analyst would have conflated the two cases.
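    For readers wondering what "fraction of explained variance" means operationally: it is just each eigenvalue of the proxy cross-product matrix divided by the sum of all the eigenvalues. A minimal numpy illustration on pure-noise data (conventional full centering; the dimensions are arbitrary and the numbers are not meant to reproduce M&M's):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 20))   # 200 "years" x 20 pure-noise "proxies"
    X = X - X.mean(axis=0)               # conventional (full-period) centering

    s = np.linalg.svd(X, full_matrices=False)[1]
    explained = s**2 / np.sum(s**2)      # eigenvalue_i / sum of all eigenvalues
    print(explained[:3])                 # leading fractions; fairly flat for noise
    ```

    A flat spectrum like this is what a noise-only network should give; a strongly dominant leading fraction is the mark of a real common signal, which is the point of the comparison above.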

  20. Nice job, DC – I'm impressed. I have always found it strange that McIntyre et al. consistently make the leap of logic that a poor choice of methodology, such as Mann's PCA, is equivalent to deliberate deception, and that the opposite of Mann's findings must therefore be true.

    Kate

  21. PolyisTCOandbanned

    Deep: The biggest issue is Wegman not even knowing the nature of the problem (in terms of a feel for the complexity). Remember, Jolliffe said it would take him a few weeks, plus code and access to the principals to ask questions, to really weed through MBH and MM and assess the impact and correctness of the different methodology choices. But Wegman arguably did a worse job than even McIntyre in terms of looking at the algorithm. And he went after it with himself (probably not a top data analyst at this point in time), along with a poor student and then some other dude. He didn't really even have a feel for what was involved or give it real care and attention. Too much trumpeting of the Wegman CV, not enough hard-core work. I can forgive a mistake, but not so easily forgive not even making a good effort.

  22. It’s sterling service DC, thanks for your efforts!

  23. As someone who only has a passing familiarity with PCA, can someone explain what the original proxy data actually looks like? I’m used to using PCA in an ecological context with a multivariate dataset where you might have all sorts of different types of data like various species abundances, and you plot PC1 vs PC2 to look at how different communities fall out. But in the case of proxy data, is each input variable one temperature series from one (or one set of) tree rings? And the PCA then tells you which proxies have the best coherence?

    Or does each proxy data set have multiple potential temperature predictors, and the PCA is a way of telling you which predictors explain the most temperature variation?

    • Rattus Norvegicus

      Actually in the case of the infamous Mann papers PCA was used more as a means of data reduction. Each series (a site chronology) consists of multiple trees from a single site (a collection of stands, trees from individual plots) which give a history for a particular area, usually a few km square.

      Because North America was so heavily represented in the proxy dataset, Mann chose to use PCA to reduce the number of series so as not to overweight NA in the resulting reconstruction. Because of the differing environmental influences at each site, treating the composite of all NA sites leads to multiple predictors. In the incorrect version of PCA Mann used, this led to the promotion of the bristlecone pine series to PC1, whereas the correct version left them at PC4, explaining only about 5% (IIRC) of the variance.

      Your second hypothesis is very close to the mark. And someone, please correct me if I am, as is very likely, wrong.

    • Or does each proxy data set have multiple potential temperature predictors, and the PCA is a way of telling you which predictors explain the most temperature variation?

      The PCA only tells you which patterns explain most of the proxy variation.

      The calibration against temperatures happens at a later step.

  24. But in the case of proxy data, is each input variable one temperature series from one (or one set of) tree rings? And the PCA then tells you which proxies have the best coherence?

    This one is pretty much right.

    n.b. “coherence” has a particular meaning in statistics, but I’m assuming you meant it in the plain English sense.

    • luminous beauty

      The calibration against temperatures happens at a later step.

      Uh, no. Proxies were chosen a priori because they calibrated well against local or regional temperatures (and/or moisture) in previous studies. PCA was performed as the first step (after areal adjustment) on the gridded instrumental data, 1902 – 1995 and the individual proxy series from 1902 – 1980 were calibrated against the corresponding EOFs of the instrumental data matrix by singular value decomposition to determine retention of reconstructed PCs for each proxy series and then tested for robustness against the 1854 – 1902 validation period as well as a smaller subset of instrumental/historical EOFs going back to the 16th century. The movement of PC1 in the calibration period to PC4 in calibration + validation steps is likely due to the fact that it corresponds to the elevated trend in global temperatures being the most significant pattern in the 20th century greatly reduced by inclusion of earlier temporal variance that doesn’t have this positive trend.

    • LB,
      The PCA focused on by M&M is *not* the instrumental temperature processing (by definition that couldn’t be short-centered).

      Rather it's in the dimensionality-reduction pre-processing applied (in this case) to the AD 1400 North American tree-ring network. This generated leading PCs that represented the 70 proxy series. Those PCs were then used, along with other (sparser) proxy series, in calibration against the instrumental temperature PCs, as you say.

      My understanding is that this reduction was to prevent swamping of the other proxies by the denser tree-ring sub-network, and also to preclude overfitting.

    • I guess on rereading we’re all in violent agreement here. But I don’t see that pete was wrong in his characterization of calibration as a “later step” (i.e. subsequent to the tree-ring sub-network reduction).

  25. Dumb question. what do you get if you simply add all the 10K and all the 0.1 K

    • The HSI is bimodal — there are roughly equal numbers of upside-down and rightside-up hockey stick series. "Simply" add them together and they should cancel out for the full 10K.

      A quick glance at the R code suggests the 100 hockeysticks stored in the equipment shed are the 1% with the most positive HSI (not the largest in absolute value). So add up the 100 and you probably get a hockey stick with about 1/10th the relative amplitude in the zig-zags.

  26. I presume you don’t mean literally adding, but …

    Half the PC1s point down. But orientation is irrelevant really, so the answer is the average PC1 (or average of all the 10 K PC1s if all “flipped” the same way) would have a hockey stick index of about 1.6. That is, on average the calibration period mean is 1.6 stdev above (or below) the overall series mean. The top 100 average 1.96 HSI.

    So this is best thought of as one more exaggeration (~20%) on top of all the others. It would be interesting, though, to see the exaggerating effect of displaying the top 1% as opposed to a truly random selection in other processing scenarios (e.g. with Huybers style renormalization, and/or AR1 noise).

    And of course a truly random selection would point both up and down. But it’s so bothersome to have to explain why that is not relevant, and then, too, the visual effect would be lacking.

    • Well actually, being a simple old bunny Eli does mean simply adding. If nothing else this is the simplest (insert word meaning nonsense here) test on what anyone claims to be random data.

    • Well, then pete had the right answer, although I think the amplitude of the zig-zags of the hockey stick for the 100 added together (or let’s say averaged at each time t) would be more than one-tenth of the individual PC1s. I’ll report back :>)

    • Actually, on reflection *and* inspection, the squiggles are that small (or even smaller), although there is some centennial undulation as well. The HSI is 2.4, for what it's worth. The 20th century portion is interesting, too: steep rise to 1930, curved top peaking at 1940, and then a noticeable drop-off to 1980. So a hockey stick blade, but with an extreme concave bend at the end (like the NRC chart, actually).

    • The noise in each simulated PC1 will be autocorrelated; however there will be very little correlation amongst the separate realisations of the PC1. So averaging them should reduce the noise by square root N.

      Note that the HSI is relative to the total variance of the PC1 — averaging the realisations reduces the variance which increases the HSI. If you keep increasing N, you’ll eventually converge to a HSI of 2.9, no matter how large or small the AHS effect is. So taking the HSI of the averaged PC1 is probably meaningless.
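      The square-root-N point is easy to check numerically. A numpy sketch (the AR(1) parameter and dimensions here are arbitrary choices, not anything from the thread's runs):

      ```python
      import numpy as np

      rng = np.random.default_rng(2)
      n_years, n_series, phi = 1000, 100, 0.2

      # 100 independent AR(1) realisations: x_t = phi * x_{t-1} + e_t
      e = rng.standard_normal((n_years, n_series))
      x = np.zeros_like(e)
      for t in range(1, n_years):
          x[t] = phi * x[t - 1] + e[t]

      single_sd = x.std(axis=0).mean()   # typical std of one realisation
      avg_sd = x.mean(axis=1).std()      # std of the average of all 100

      # Averaging N independent series shrinks the noise by about sqrt(N),
      # so this ratio should come out near sqrt(100) = 10.
      print(single_sd / avg_sd)
      ```

      Since the HSI divides by the (shrunken) overall standard deviation, the HSI of an averaged PC1 says more about N than about the biasing effect itself, which is the caution about the 2.9 limit above.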

    • Rattus Norvegicus

      DC, doesn’t what you say here sort of make the point? The M&M technique incorporated at least some of the signal. The trends you describe for the 20th century look an awful lot like the temperature record.

      Not very good for them, either in their "trendless red noise" findings or in their finding of a much higher threshold for statistical significance in the RE measure. In the words of Horatio Algernon, they've been "emathsculated".

    • The curve at the top is a lot more symmetrical than the real record. Also, in the NRC version it arises from AR1(.9) noise which has very high auto-correlation, but no temp signal as such.

  27. “So this is best thought of as one more exaggeration (~20%) on top of all the others. ”

    This reminds me: it might help people to have a pair of nice short flow diagrams showing:

    a) the "right" way to do it, versus

    b) the way MM/WR use it, with some idea of the "exaggeration" or distortion at each step.

    For some reason, I think of a UNIX pipeline where each command distorts the calculations along the pipeline.

  28. I always like reading these articles kind of like carving a turkey, a lot of trimming around the edges but never getting to the meat of the matter. AGW has been thoroughly exagerated for quite some time now. I am looking forward to the cancun conference, no major world leaders in attendance, no agreement of any kind another fiasco in the making.

    Keep up the good work, meanwhile the rest of the world has moved on to more pressing matters like how to more effectively convert bitumen to Oil.

    [DC: Hmmmm, a content free and relevance free post, with ClimateDepot link back (removed of course). I doubt I will bother with this commenter in future. In the mean time, let’s not feed the trolls. ]

    • I doubt I will bother with this commenter in future.

      He doesn’t know how to carve a turkey, nor how to spell, so I don’t think we’ll be missing much regarding climate science …

  29. Rattus Norvegicus

    Ah something like:

    acf | hoskings.sim | sort | uniq | head -2 == 1000% distortion?

  30. I don’t fully understand this, but it looks as if we’re dealing with a case of the metaphorical “thumb on the scales”–which would be an instantiation of scholarship much worse than just “shoddy.”

  31. Great post, DC. Just when I thought nothing could possibly be added to the labyrinthine hockey-stick debate, you manage to pull together much of the argument in a very clear way, and bring something new to it, answering questions I wondered about when I first read MM. Bookmarked!

    I can see that in any case it's odd of MM to compare one of their 'sample' PC1s with the full MBH reconstruction (MM fig. 1 / Wegman fig. 4.1). But I'm afraid I'm still clueless enough about the methodology that I have to ask: what about their scaled-up hockey stick being an order of magnitude 'smaller' than the MBH curve (see the y axis)? Is this relevant to the argument at all?

  32. I've been trying to recreate the NRC figure, but without success. I'm a Matlab user and not really familiar with R, so their code isn't all that helpful to me.

    I’ve created AR1(.9) noise for each series using the filter function in Matlab. As far as I can tell, this is an appropriate method (yes?):
    filter(1,[1 -.9],randn(Years,1))

    I then short-center each series over the calibration period using the MBH method described by Huybers ('IT' indexes each series):
    (Series(:,IT)-mean(Series(Calib,IT)))./std(detrend(Series(Calib,IT)))

    I then run the PCA based off of the covariance matrix:
    pcacov(cov(NormSeries))

    PC scores for each year are then calculated by taking the sum of the products of normalized values and loadings. I’m sure there’s a prettier way, but I’m just running this within loops:
    scores(YR,PC) = -sum(NormSeries(YR,:).*coeff(:,PC)’)

    I can confirm that these scores match up with those calculated using the princomp function when data are normalized by just subtracting the mean.

    The only way that I can get anything resembling a hockeystick is if I manually insert a hockeystick into each series. The NRC AR1(.9) figure is so striking, but I still can't figure out why simply "short-centering" would produce such an extreme effect from random red noise, even with high autocorrelation. For me, all short-centering does is make the mean of each series over the calibration period equal to zero.

    Granted, I did this really quickly (so there could be a stupid mistake), but am I missing something? I still don’t see why short-centering should create such a strong pattern that isn’t found in the original data.

    • I then run the PCA based off of the covariance matrix:
      pcacov(cov(NormSeries))

      I hesitate to call this a “mistake”, since using the built in methods is usually more sensible than rolling your own.

      The cov method is going to centre NormSeries properly, making your earlier short-centring irrelevant.

    • You’ll have to roll your own “short-centered” PCA, although I’m not sure how Matlab will do that (as I’ve never used Matlab).

      In the NRC code the “short-centered” standardization of each generated series is done, then the svd transformation is applied to the resulting matrix of “short-centered” series. (Does that help?)

      for (j in 1:p) {
      b <- arima.sim(model = list(ar = phi), n);
      a[ , j] <- b - mean(b[baseline]);
      }
      invisible(svd(a)$u[,1]);
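      For non-R readers, here is a rough numpy equivalent of that snippet (arima.sim replaced by a hand-rolled AR(1); the values of n, p, phi and the baseline window are illustrative stand-ins, not the NRC's). Note that the centering is done by hand before the SVD, so no covariance routine ever gets a chance to re-center the columns:

      ```python
      import numpy as np

      rng = np.random.default_rng(3)
      n, p, phi = 600, 50, 0.9             # length, network size, AR(1) parameter
      baseline = slice(n - 100, n)         # the "short" centering window

      a = np.empty((n, p))
      for j in range(p):
          e = rng.standard_normal(n)
          b = np.empty(n)
          b[0] = e[0]
          for t in range(1, n):            # stand-in for arima.sim: b_t = phi*b_{t-1} + e_t
              b[t] = phi * b[t - 1] + e[t]
          a[:, j] = b - b[baseline].mean() # a[ , j] <- b - mean(b[baseline])

      pc1 = np.linalg.svd(a, full_matrices=False)[0][:, 0]   # svd(a)$u[,1]
      ```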

    • I’m not a Matlab user either, but from your code it looks like you only need a roll-your-own version of cov.

      X* X / (N-1) should do it, where * is the matrix transpose.

    • Oh, I see. Matlab's pcacov() will work from a supplied covariance matrix. So never mind – listen to pete (always a good idea).

    • I’m not a Matlab user either, but from your code it looks like you only need a roll-your-own version of cov.

      X* X / (N-1) should do it, where * is the matrix transpose.

      Thanks Pete – it turns out that the cov function in Matlab does indeed re-center the data. I've done it as above (which is the same code as within the function, after the centering step) and I indeed get a figure that matches up with the NRC.

      I ran 1000 iterations for AR1(.9) and AR1(.2) and calculated the HSI values.

      For AR1(.2), mean HSI is 0.82 with a std of 0.11. The max was 1.12.

      For AR1(.9), mean HSI is 1.94 with a std of 0.11. The max was 2.18

      So while 99% of MM’s pseudo proxies show HSI values above 1.0, it’s essentially the opposite with AR1(.2) noise.

      Thanks to DC for diving into this so thoroughly and making me curious to try it myself. I certainly learned a thing or two…

  33. So I would hope to see a correction on record for the GRL article?

  34. “The Chairman of the Committee on Energy and Commerce as well as the Chairman of the Subcommittee on Oversight and Investigations have been interested in an independent verification of the critiques of Mann et al….”

    “Because of the lack of full documentation of their data and computer code, we have not been able to reproduce their research. We did, however, successfully recapture similar results to those of MM. This recreation supports the critique of the MBH98 methods…” — from the Wegman Report

    Horatio has learned something useful: when it comes to statistics, at least, “independent verification” is just like clicking the Youtube “replay” button.

  35. Horatio:
    The comment about statistics fits only if you think the Wegman Report was an exemplar of real statistics.

    I certainly don’t think that, and in fact over the long term I suspect the people who will be most appalled by this mess are good statisticians.

    • Horatio’s comment was tongue in cheek.

    • Horatio: I thought it might be, but I emphasize this because I think there is evidence (in various places in SSWR) that the Wegman Report and later talks by Wegman and Said were doing what they could to foster a statisticians-vs-climate-scientists fight. So this is one case where, if you mean tongue in cheek, it is helpful to say so.

      I don’t have a copy handy, but I labeled this the “faux fight” meme, which didn’t work very well, even with McShane and Wyner’s attempt to try again.

    • PolyisTCOandbanned

      I think the "meme fight" can be understood without any sort of coordination or conspiracy. It is common for opposing sides of debates to try to push an angle. And there is even some small validity to the angle. (Overplayed, I believe, but the way fields contribute to each other can lead to these gaps.) You can even go back to Hotelling and read his famous essay (go research it) on the role of a Department of Statistics. It is a deeper thread than just the recent climate issue. It has to do with conflicts of departmental statistics versus non, math versus stats, who should teach intro courses, yada yada. Seriously, go read it and also get Rabett's take. And I am not saying Hotelling is all right or all wrong (although he was a great man). Just read it and think about it, to have some feel for the landscape. Even if you change no opinions, it will give you some flavor for how Wegman sees things. Really, man. Hunt down that essay. It is a total classic in the field of statistics. It's in the big blue Tukey book. (I think.)

      The climatologists are not formally statistical enough was being pushed well before Wegman. I agree with it a tiny bit. But I think things will get better. And I like guys like Annan. Mike on the other hand is cocky. And bald. did I mention bald? I could take him in a fight too. :)

      [DC: Cut the macho stuff, please. Or I can do it for you. Thanks! ]

    • Poly:
      My “faux fight” meme came only after:

      a) studying Wegman’s various talks to statistics audiences light on climate expertise

      b) Looking at Wegman’s absurd talk to 2007 NCAR, about which I got feedback from 2 attendees, both rather negative.

      c) and exchanging numerous emails with serious statisticians, who were in fact dismayed at Wegman, in some cases expressed that to him, but in any case, were ignored.

      The axe-grinding went way beyond the usual inter-disciplinary bickering, including specifically the topic of who teaches statistics and how, something I’ve discussed over the years at various schools with professors, department heads and one university president, having given invited (simple) statistics-related lectures at 7 schools, most of which rank in Top 100 on most people’s lists.

      People are entitled to opinions, but minimally-informed ones don’t really add much or encourage one to read them. In case people wonder, even given the massive length of SSWR, much was known that was *not* written down, but none of it weakens what was written. Some is worse.

    • I would welcome a story about these exchanges with serious statisticians.

      It would complement well the stories so far told here.

    • PolyisTCOandbanned

      I'm not really disagreeing about whether it is faux or not. I just think you could have a better feel for the meme if you read Hotelling. Heck, you can read it and say Hotelling was wrong. Or that Wegman is misapplying Hotelling. That's really not my point (right or wrong, per se). I just want you to have a little deeper understanding of the development of the ideas, of the meme background. It's like you could disagree with (or agree with!) invading Iraq, but a deeper understanding of the historical evolution of the different "neoconservative" ideas would improve your criticism. Heck, you can even read Hotelling and then tell TCO it was a dry hole.

      http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.bsmsp/1166219196

      http://www.stat.berkeley.edu/~terry/hotelling/HotellingLect1.pdf (see pages 23 on)

      P.s. Since I know y'all love these social engineering connections, realize that there is a UNC connection between Wegman and Hotelling. And again, even if Wegman is "wrong", understanding that he will have read and been deeply influenced by this "stats policy" essay of Hotelling's will at least give you a little insight into Wegman's thought process.

    • PolyisTCOandbanned

      1. Here is a list of links with various discussion of Hotelling’s paper.

      http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177013002

      click for the different comments and then go to the bottom to get the PDF.

      2. Again, I’m not at all trying to defend Wegman (or Hotelling). Just to give you a tiny insight into “where he is coming from”.

      3. My (TCO personal view now) thinking is that you have to take Hotelling with a grain of salt. He is a chairman of a Statistics Department defending turf in a sense. One can imagine similar debates as to who should teach fluid dynamics and heat transfer to chemical engineers (the ChemE profs or MechE?), etc. There are pluses and minuses of each approach (and Hotelling has not really won his argument; if anything maybe he has sort of lost it). That said, stats is a very easy thing to mess up in terms of logic flaws. And people do, all the time (just look at a lot of the medical literature). So along with the (good for the world) increased USE of statistics in applied fields have gone continued theoretical lapses. Mike in a way, with his (opinion) use of new methods without thorough (I mean very thorough and theoretical, not JoC) validation of them, sort of fits into this pattern. It’s also interesting that Hotelling notes the importance of a statistician having dug deep into at least one applied field (the issues of data collection and all that, not just up high linear albegra theory). I think in this sense, you can see a lot of the gaps of Weggie and M&W (too cocky by far about not knowing the paleoclimate background and how it affects stats).

      Similarly, sort of fitting into this concern… I was very bothered that Wegman did not FOLLOW UP his initial report with more work in paleoclimate. I mean, he’s got theoretical training, he’s been pulled into this pretty interesting field, and there are probably opportunities to make advances in it (it is “rich”). But there was really nothing progressing from his report. No deeper analyses, no retractions or amendments from grown understanding, no tangents into new problems, no bringing back of things he learned from this work to other (non climate) fields. The lack of such activity worries me that he is really not capable of (or interested in) bringing it. Heck, really McI (for all his warts and his lack of a Ph.D. in stats) has looked and thought about these concepts more than Weggie. And of course, the real advances are coming from Li and Huybers and Annan and the like. Heck, I would even put M&W (which I remain very critical of, both for results and for cockiness about ability to make advances in the field without really coming up to speed) in front of Wegman at this point.

  36. The Wegman report was described as “independent” and “peer-reviewed” by its Republican sponsors. I plan to do a post on these claims. For now, I’ll note that mechanical “reproduction” of figures produced by archived data using the original study’s R script hardly constitutes “independent verification” (a point made very eloquently by Horatio).

    • Ah, but then you do not have your auditor merit badge

    • “and “peer-reviewed” by its Republican sponsors.”

      Which, of course, Wegman agreed with at the hearing.

      MR. WHITFIELD. […] and I can tell you right now that his document has
      been peer reviewed also, and we will get into that later.
      […]
      MR. STUPAK. But we note that Dr. Wegman’s work is not yet published or
      peer reviewed so it is very difficult for us to evaluate his work.
      […]
      MR. STUPAK. Did anyone outside your social network peer review your report?
      DR. WEGMAN. Yes.
      […]
      MR. STUPAK. No, no, I am talking about general peer review. If you are going to have a peer review, don’t you usually do it before you finalize your report?
      DR. WEGMAN. Yes.
      MR. STUPAK. Well, your peer review was after you finalized it?
      DR. WEGMAN. No, it was before. We submitted this long before.

      Looking forward to your post, DC. It’s been one of the biggest memes about the Wegman Report.

    • J. Bowers: you can get a headstart about reviewers:
      See SSWR, specifically A.1.

      willard: I’m not going to say any more about conversations with statisticians. If you read SSWR carefully you can probably make informed guesses about a fraction of those I might have communicated with. I left out a lot, given:
      a) Issues that people were pretty peeved about, but thought were between them and Wegman …
      b) Things that people remember telling Wegman, but either verbally or via email they didn’t save. [That kind of stuff would come out in testimony, but I wasn’t going to use it.]
      c) Second-hand stuff like “X was really worried about Wegman, for reason Y.” (in fact, X indeed was worried about Y, and was delighted later to be sent a pointer to SSWR when it came out.)
      d) Opinions that certainly were consistent, but not really verifiable, and in some cases perhaps not printable.
      e) And of course, discussions that happened after SSWR came out.

      Unsurprisingly, there is a fairly tight social network among senior statisticians. They talk to each other also.

      Wegman certainly knew the reviewers. But anyone who thinks the reviewers were all just Wegman buddies asked to bless the WR, really needs to read A.1 and rethink that.

  37. This long ago ClimateAudit thread (September 2006, two months after the Wegman Report) is interesting to read now.

    We have Steve and bender claiming – over and over – that Wegman replicated M&M for the case of AR1(0.2) noise instead of ARFIMA, a claim we now know is baseless. TCO questions whether this is really the case, since the Wegman results are so similar to M&M. I wonder when McIntyre finally realized that Wegman hadn’t used AR1, but merely thought M&M themselves had. The latest “Wegman used AR1” claim from McIntyre is from 2008, unless someone can find a more recent one.

    But TCO also raised another question that I have wondered about.

    Looking at the definitions in waveslim, hoskings.sim asks for the “autocovariance sequence” or acvs. It is not clear to me that this is the same thing as the “acf” that R calculates. Not clear if it is the right argument to be passed over.

    Then:
    I’ve been in touch with 3 big guys in this area of work. Nothing to report yet. Just trying to figure out what Steve did. Exactly.

    If the acf is just a scaled version of the acvs, I suppose it would be valid input for the creation of ARFIMA synthetic series.

    But, TCO, did you get a definitive answer on this?

    BTW, I think Hosking 1984 is the right reference since that’s the one where he described the creation of synthetic ARFIMA series.
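
    For readers following along, the practical difference between the two noise models is large: AR1(0.2) series lose essentially all memory within a few lags, while highly persistent series do not. Here is a toy illustration – a plain AR(1) generator with invented parameters, not M&M’s actual simulation code:

```python
import random

def ar1_series(phi, n, seed=0):
    """Generate an AR(1) series x[t] = phi*x[t-1] + e[t] with standard normal innovations."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

def autocorr(xs, lag):
    """Sample autocorrelation of xs at the given lag."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs)
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean) for t in range(n - lag))
    return cov / var

low = ar1_series(0.2, 5000)           # AR1(0.2): weak, quickly-decaying memory
high = ar1_series(0.9, 5000, seed=1)  # high-persistence stand-in
```

    With 5,000 points, the lag-1 autocorrelation of the low series comes out near 0.2, but by lag 10 it is indistinguishable from zero (theoretically 0.2^10 ≈ 10^-7), while the phi = 0.9 series still retains substantial correlation at lag 10 (theoretically 0.9^10 ≈ 0.35). Whether the null proxies have the first or the second kind of persistence is exactly what is at stake.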

    • PolyisTCOandbanned

      Deep: I don’t have much more to report. Yes, I could have pursued it harder, but got really annoyed with the evasions and the NOT WANTING others to understand what was done. I ended up saying eff it. You guys are taking it further than I have.

      I would think they should reference Whitcher or whatever his name is (not Hosking). Hosking did not develop the code even though Whitcher named it for him, and doesn’t really care much about it or stand behind it. I guess if you wanted to go overboard, you could reference both, but really referencing Whitcher is the key ref, for me. If you are trying to see what was done, to replicate, etc., you really want to go to Whitcher, not to Hosking. Hosking is actually too far back.

    • PolyisTCOandbanned

      Deepie:

      Just in case you didn’t know, there is (I think, or I think Steve has said) some sort of a fork in the MM code, so that you can use it either in the AR1 method OR in the “full ACF method”. So talking about “what is in the code” is not even sufficient. You have to ask what was done for each specific figure (or number referred to in text) within a paper or a blog post.

      Steve has exploited the existence of this fork rhetorically a lot (when challenged on full ACF, he mentions the AR1 existing, etc.). The bad thing though is that he did not really report (in sort of the Huybers-Berger full factorial method) “this is what you get from A, this is what you get from B”. If he had, that would have been interesting. He had some small allusions, but definitely more in a CYA (bury the info in an Enron footnote in the 10K) type manner. Not a clear layout, and definitely evading answers on definitional questions. (You see the same pattern with not defining terms on the MMH fiasco paper.)

      Instead, he always tries to shift and evade disaggregation. Not sure how much of this is rhetorical dishonesty games (the guy actually likes Clinton’s word parsing games) and how much is just him being warped or stupid. Probably some self-satisfying combination of dishonesty and stupidity. I get tired from unraveling the snail though.

    • I’m aware of the fork. To test AR1 that’s what I’d use. I’d also have to correct the code to generate, store and display a new sample of “hockey sticks” each run. But I digress …

      I don’t see any evidence that M&M or Wegman et al or anyone else actually used the fork. But for sure M&M should have reported AR1 at different levels.

      As for Wegman et al, Fig 4-2 would have been very different with AR1(.2) (although Fig 4-4 was – mistakenly – hardcoded). And they kept saying that M&M used conventional “red noise”. If they had used the fork, wouldn’t they have explicitly said “M&M used ARFIMA noise, but we show the same effect with the AR1 option M&M provided”?

    • I’m pretty sure Wegman’s team must have modified McI’s code, if only to get a higher resolution of Fig 4-4.

      But even with method2="arima", McI’s code uses heterogeneous AR(1) coefficients. Does anyone know where 0.2 came from?

    • So that sounds like “empirical” AR1, based on fitting AR1 to each of the 70 real proxy series.

      Anyway, my best guess is that Wegman et al misread M&M. There is a reference in M&M to AR1(.2) as being used for benchmarking by paleoclimatologists. Perhaps the repeated misleading reference to red noise led Wegman et al to confuse the two methodologies.
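
      To make “empirical AR1” concrete: fitting AR(1) to a series amounts to estimating its lag-1 autocorrelation. A minimal sketch, with synthetic stand-ins for the proxy series (the series and coefficients below are invented for illustration; this is not the actual North American tree-ring network):

```python
import random

def fit_ar1(xs):
    """Estimate the AR(1) coefficient of xs as its lag-1 sample autocorrelation."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs)
    return sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1)) / var

def make_series(phi, n=2000, seed=0):
    """Hypothetical proxy stand-in: an AR(1) process with known coefficient phi."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

# "Empirical AR1" in this sense: one fitted coefficient per series.
proxies = [make_series(phi, seed=i) for i, phi in enumerate([0.1, 0.3, 0.5])]
coeffs = [fit_ar1(p) for p in proxies]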

    • PolyisTCOandbanned

      Good. Figured you had to know the fork, but just triple checking.

      I pretty much agree that it seems others were not really clear on the forking (e.g. Wegman), and I think McI’s confused presentation of things, rather than a clear “this does this, that does that” type of discussion, is a lot of the reason for it. So in a sense that helps explain others not seeing some of these issues. You really have to parse hard and do a lot of double-checking, to make sure not to get caught up in McI’s snares (and I’m sure he’ll cite some throwaway remarks in EE05 or the like, but those are the CYA footnotes, not the clear layout).

      Only other place you might look for an issue is the Huybers comment. I remember asking him which fork he used and never got an answer (and did not pursue further). So there might be some possibility that Huybers used the AR1 (in other words supports McI’s point that the effect had been shown). Or that Huybers used a mistaken fork. (No accusation, and Huybers is a crackerjack brain; just another string in the web, and another place to check for an issue.) And it would not change the fundamental “catches” of the Huybers comment. Just in the back of my head is another place for checking stuff.

      Also, moving past that ancient history, there is also the issue of INTERACTIONS of method choices (e.g. Huybers’ catch on the standard deviation dividing interacting with the type of noise). You have mentioned this interaction possibility before, so not an aha to you. (Kudos, btw.) In terms of thinking about it, I actually think some model of a full factorial is more the way to think of things than the Mashey linear summation. Since there can be interactions, it’s not really linear anyhow. Plus it allows understanding the landscape independent of arguments about which choices are right/wrong, and allows picking different blends based on opinions (for instance the Huybers issue).

      In any case, looking at that whole landscape, and for most combinations in the full factorial will (TCO assertion) lead to a view that Mannian short-centering DOES BIAS PC1 and that McI DOES EXAGGERATE how much it biases! ;)
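
      The assertion is easy to probe in miniature. The sketch below (invented parameters; a simplified stand-in, not the MBH98 or M&M code) compares the “hockey stick index” of PC1 when each pseudo-proxy is centered on its full-period mean versus only on its late “calibration” segment, using persistent AR(1) noise:

```python
import random

def ar1(phi, n, rng):
    """AR(1) pseudo-proxy with standard normal innovations."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

def pc1_scores(X, center_from):
    """PC1 time series of the n x p matrix X, with each column centered on the
    mean of its rows from index center_from onward. center_from=0 is
    conventional full centering; center_from > 0 mimics short-centering."""
    n, p = len(X), len(X[0])
    cols = list(zip(*X))
    means = [sum(c[center_from:]) / (n - center_from) for c in cols]
    Y = [[X[i][j] - means[j] for j in range(p)] for i in range(n)]
    # p x p cross-product matrix of the (short- or full-) centered data
    M = [[sum(Y[i][a] * Y[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(100):  # power iteration for the leading eigenvector
        w = [sum(M[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return [sum(Y[i][j] * v[j] for j in range(p)) for i in range(n)]

def hsi(series, blade):
    """Hockey stick index: |blade-period mean - overall mean| in sd units."""
    n = len(series)
    m = sum(series) / n
    sd = (sum((x - m) ** 2 for x in series) / n) ** 0.5
    return abs(sum(series[-blade:]) / blade - m) / sd

rng = random.Random(0)
short_hsi, full_hsi = [], []
for _ in range(30):
    proxies = [ar1(0.9, 200, rng) for _ in range(20)]  # 20 proxies, 200 "years"
    X = [list(row) for row in zip(*proxies)]
    full_hsi.append(hsi(pc1_scores(X, 0), 50))
    short_hsi.append(hsi(pc1_scores(X, 150), 50))  # center on last 50 "years" only
avg_full = sum(full_hsi) / len(full_hsi)
avg_short = sum(short_hsi) / len(short_hsi)
```

      In runs of this sketch the short-centered PC1s show a larger average index, i.e. the bias is real; how much larger depends heavily on the persistence of the noise, which is exactly the magnitude question raised above.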

    • There is a reference in M&M to AR1(.2) as being used for benchmarking by paleoclimatologists. Perhaps the repeated misleading reference to red noise led Wegman et al to confuse the two methodologies.

      Thanks, that’s what I was looking for.

      So that sounds like “empirical” AR1, based on fitting AR1 to each of the 70 real proxy series.

      I’d put “empirical AR1” in the same box as “persistent trendless red noise” — weird McI terminology that should be replaced with something more informative.

    • Touché. How about so-called “empirical AR1”? (I think the term actually comes from McIntyre acolytes McShane and Wyner; I don’t know what McIntyre calls it).

    • Poly…
      “Mashey linear summation” ???
      I think you have me confused with Rabetts, or someone.

  38. OK I looked it up:

    Thus, the ACVS differs from ACS in that the mean µ has been removed from the data sequence. In other words, ACVS is equivalent to its ACS if the mean of the data sequence is zero. For continuously sampled processes, the ACS and ACVS are known as the autocorrelation function (ACF) and autocovariance function (ACVF) respectively.

    So this will work fine, as long as each input series to acf() has mean 0. Or if the acf() call specifies return of the ACVF, not the ACF. If it doesn’t …? Pete, any thoughts?

    (BTW, it does look to me that acf() is called with the default type parameter i.e. type is “correlation”, not “covariance”. And the input proxies appear not to have zero mean. So this may be yet another issue).

    • I’d say that using hosking.sim is a mistake regardless of whether you use the acf or the acvf.

      I’m speculating here, but I suspect it makes no difference if you’re using correlation PCA, but some difference if you’re using covariance PCA.

      As you noted, you could add type="covariance" to McI’s code to test this.

    • I think I was right the first time. Most authorities give ACF/ACS as just the ACVS/ACVF scaled by the variance. This is certainly in line with R acf() and with the normal definitions of covariance and correlation. So this doesn’t appear to be an issue after all.

      That doesn’t legitimize the procedure, of course – just that it does appear to be implemented as intended.
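
      That scaling relationship is easy to verify numerically: the sample ACF is just the sample ACVF divided by its lag-0 value (the variance). A quick check on toy data (Python rather than R, but the definitions match):

```python
def acvf(xs, max_lag):
    """Sample autocovariance sequence, mean removed (biased estimator, as in R's acf)."""
    n = len(xs)
    mean = sum(xs) / n
    return [sum((xs[t] - mean) * (xs[t + k] - mean) for t in range(n - k)) / n
            for k in range(max_lag + 1)]

def acf(xs, max_lag):
    """Sample autocorrelation sequence: the ACVF scaled by its lag-0 value."""
    cov = acvf(xs, max_lag)
    return [c / cov[0] for c in cov]

xs = [1.0, 3.0, 2.0, 5.0, 4.0, 6.0, 5.0, 7.0]
cov = acvf(xs, 3)
cor = acf(xs, 3)
```

      cor[k] equals cov[k] / cov[0] at every lag, and cor[0] is 1 by construction; so if the downstream use is scale-invariant (or the series are later rescaled), the distinction washes out.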

      Pete, you say:

      I’d say that using hosking.sim is a mistake regardless of whether you use the acf or the acvf.

      Do you simply mean that this approach to generating “null” proxies is conceptually flawed and produces highly misleading results? Of course, I would agree with that. Or are there any other issues?

    • Do you simply mean that this approach to generating “null” proxies is conceptually flawed and produces highly misleading results?

      Yep.

    • PolyisTCOandbanned

      Well, McI was still a jerk for not answering the question.


  39. That’s “some” PC1, all right. It was carefully selected from the top 100 upward bending PC1s, a mere 1% of all the PC1s.

    That is what I have been wondering about. The size of the sample that Mann used to get his temperature records from, and the size of the sample M&M used to get their ‘hockey sticks’ from. If M&M had to trawl through many more to get their samples, aren’t they misrepresenting Mann’s process?
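
    For concreteness, the selection procedure at issue can be sketched like this (illustrative Python with white-noise stand-ins for the simulated PC1s; the scoring statistic is a generic “hockey stick index”, not necessarily M&M’s exact definition):

```python
import random

def hockey_stick_index(series, blade_len=100):
    """Departure of the closing-segment mean from the overall mean, in sd units."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((v - mean) ** 2 for v in series) / n) ** 0.5
    return (sum(series[-blade_len:]) / blade_len - mean) / sd

rng = random.Random(42)
# Stand-ins for simulated PC1s: 1,000 white-noise series of 600 "years" each.
sims = [[rng.gauss(0, 1) for _ in range(600)] for _ in range(1000)]

# Keep only the top 1% by index; any "sample" drawn from this pool
# is guaranteed to show a pronounced upward blade.
top_one_percent = sorted(sims, key=hockey_stick_index, reverse=True)[:10]
```

    Displaying a dozen series drawn from top_one_percent says nothing about what a typical simulated PC1 looks like – which is the point of the post.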

  40. Pingback: Post your climate change humour here - Page 14 - TheEnvironmentSite.org Environment Forum

  41. Uh, I’m not very conversant in statistics nor in climatology, but the last part of the hockey stick curve (contemporary times) is the best known, the best proven, because it’s based on actual thermometer readings, not proxies.
    So, stating that randomly generated samples sometimes show upward curves at the end is utterly irrelevant to the subject.

    Maybe M&M want to argue that the MWP bump, as shown through proxies, is a statistical aberration with no real-world significance?

    Somehow, I don’t think that is Wegman/M&M intent.

  42. up high linear albegra theory?

    Weggie?

    Does anybody take this person seriously anymore?

  43. Horatio Algeranon

    Ritson’s complete comment, addressed to McIntyre at ClimateAudit.

    “Some of the code we used was developed by former and current students working at the Naval Surface Warfare Center in Dahlgren, Virginia and may not be disclosed without approval through the Navy’s public release process.”

    Maybe that’s what Wegman used to get the higher resolution in Fig 4.4: Top Secret Naval plotting code.

    Just imagine if it fell into the wrong hands…

  44. I’ve had a little bit of trouble following this. So Wegman essentially just used MM03/MM05 code for his figures in his work and then it was cited as independent verification of MM’s work?

    Another question. Besides the short-centered PCA, what other “serious” issues were there with Mann et al. 1998/1999 that turned out to be valid?

    In going through the code were there any very serious issues with MM?

    • PolyisTCOandbanned

      “what other”? “what other”?

      You’re asking for a re-summary of a lot of debate.

      MBH had several comments on it, a corrigendum, and a bunch of McI posts. MM for its part had four comments (two published) and then additional features found wrong with it (as in this blog’s comments). And of course, there remains a lot of disagreement on what is wrong and not wrong.

      And then we get into the whole issue of right/wrong being confounded with large/small (impact).

      You’re asking for a lot, dude. A lot of repetition and going back to square one. Are you trying to get Deepie to do your 750 word Curry essay for you?

    • Gavin's Pussycat

      Besides the short-centered PCA, what other “serious” issues were there with Mann et al. 1998/1999 that turned out to be valid?

      I don’t think there were any besides the Corrigendum. Of course there were lots in the peanut gallery debate, but nothing further scientific as I remember.

      And the short centering was, arguably, a “serious” issue only in the sense that it wasn’t clearly pointed out in the original paper. It had no real impact. So, you see I disagree with TCO.

    • Clear Science,

      Answering you would be easier if we had a tool like this for climatic exchanges:

      http://mathoverflow.net/

      Since we only have blogs, everything gets rinsed and repeated way too much.

  45. Regarding NSWC:
    1) David Marchette and Jeffrey Solka were previous students.

    2) John R. Rigsby III was a current student, in that he’d finished his MS in 2005, but was (I think) working on PhD (part-time, most likely).

    I visited NSWC in the 1990s. At least then, its missions did not include paleoclimate work or using bad social network analysis to attack peer review in paleoclimate. I am becoming increasingly interested in learning whether any Federal funds or facilities were used for NSWC employees to help out with the Wegman Report, or if the work was done in their spare time.

    One must wonder. As I noted in SSWR, there really wasn’t any useful new statistical analysis, but let’s look at the various figures to see what could have been generated by code needing release approval:
    Figs 4.1-4.4: DC has shown these to have used McIntyre’s code.
    Fig 4.5 is the distorted version of the infamous 1990 IPCC FAR.
    (Maybe NSWC has a graph digitization and distortion program?)
    Figs 4.6-4.7 show versions of Fig 4.5 overlaid with noise.
    (Maybe that’s the NSWC code??)

    Figs 5.2-5.7 (social network graphs) were produced, presumably by Rigsby, but (I think) using Pajek. (The blockmodels show up in the book, and Google images: pajek shows lots of charts of the same sort as in the WR.)

    Fig 5.1 is the “allegiance plot”, which might be something Rigsby made up. I still have reservations about that, as serious SNA people were unfamiliar with it or thought it was a reinvention of old techniques known otherwise. Solka was Rigsby’s coauthor on several papers, and on Said’s committee.

    This is still unclear, but without further information:

    a) Either this NSWC story was just made up to avoid code release.

    b) OR, someone should be explaining why US taxpayers paid for NSWC employees to work on this… because if the work was done on their own time, they should not have been borrowing NSWC code to do it…

    c) Or there is some non-obvious explanation that makes it all fine.

    • PolyisTCOandbanned

      1. Well, how about Wegman turning over all the non-NSWC code. I bet most of their stuff was not using anything from NSWC. (Doesn’t solve your issue, but maybe useful to break that out at least.)

      2. Listening you talk about getting out the time clock on those guys, reminds me of the CAers whining about Gavin posting to RC during working hours using the NASA computer. This kinda stuff sounds so tickytack.

  46. Gavin's Pussycat

    Listening you talk about getting out the time clock on those guys, reminds me of the CAers whining about Gavin posting to RC during working hours using the NASA computer. This kinda stuff sounds so tickytack.

    Yes, and no. I too would hate to see scientists being gone after for clocking some activity under the wrong category… I have to keep track of my hours with 15 min accuracy, and it’s enough of a pain without the threat of your enemies going digging through the numbers.

    That being said, there is an important difference between these two cases. When Gavin writes on RC, he is doing work that he is supposed to do, that is part of his job description: science outreach. What Wegman c.s. were doing — allegedly at this point — is scientific fraud. As long as they do it with their own money, only a science misconduct charge can touch them. Federal money makes it a federal matter, with new instruments available. I don’t know about you, but I have no problem with fraudsters having the book thrown at them… the heaviest book on the shelf.

  47. Deepie

    Wow, there it is again, first ‘Weggie’ and now ‘Deepie’.

    SOP for the science denialism and creationist crowd.

  48. Horatio Algeranon

    “I would make the following distinction. The works of Mann et al. we discussed in the report were federally funded, peer-reviewed journal articles. Our report was review of those papers and was not federally funded. Our report called for disclosure of federally funded work. “

    — Wegman’s reply to Waxman, as related by Ritson (“rationale” for why material requested by Waxman had not been forthcoming)

    Remind Horatio again, who’s whining about the use of federal funds in this case?

    PS Horatio is not saying they were, but if Federal “funds” (which might include personnel/facilities/funds/time etc) were used in putting together the Wegman report, that would seem to raise an issue that involves much more than just “whining about time clocks”: Wegman’s claim to Congress (Waxman) that “Our report …was not federally funded.”

  49. Horatio Algeranon

    It probably all depends on what the meaning of the word ‘funded’ is.

  50. Sorry, I forgot to remind people of the context.
    SSWR had a section about Pro Bono. Recall that Barton and Wegman made a big deal about this having been done on Wegman’s own nickel.
    I quoted them.

    So, either Rigsby was doing this work on his own, perhaps as part of an ongoing relationship toward a PhD. That is barely OK, although then his affiliation really should have been GMU, if the work was done in his student capacity.

    But they gave his NSWC affiliation (which looks better than saying GMU), same as with Reeves (Mitre, but working on a PhD) and Said (JHU, but really gone from there by the time of the report, I think).
    I mentioned this as a minor issue in SSWR.

    BUT, if they want to claim Rigsby as NSWC and they claim they cannot release code because it needs Navy release, I think taxpayers are justified in asking how this effort contributes to the missions of NSWC…

    I had no real problem with a part-time student doing some work as part of progress towards a PhD. But this goes beyond that.

  51. Thank you for responding to my comments. I was wondering if either of you knew the value of the base period used by Mann et al. 1998/1999. I believe the base period was 1902 to 1980 or something similar, but is there any way to know what the 1902-1980 average was, so as to be able to shift to a different base period and thereby compare with another reconstruction?

  52. It’s gettin’ better all the time…

    (hat tip to Joe Romm at CP)

    The irony couldn’t be greater …

    The Wegman report called for improved “sharing of research materials, data and results” from scientists. But in response to a request for materials related to the report, GMU said it “does not have access to the information.” Separately in that response, Wegman said his “email was downloaded to my notebook computer and was erased from the GMU mail server,” and he would not disclose any report communications or materials because the “work was done offsite,” aside from one meeting with Spencer.

    The “request” mentioned above appears to be a FOI request (as the article mentions FOI earlier).

  53. When one runs into these religious-like deniers, one is bemused. However, when a large media propaganda effort becomes part of it, one starts to worry about Fascist, Goebbels-like behavior in America. WTF is going on? The use of the blogosphere and the drive-by posters on web sites, while seemingly unconnected, over time, again and again point to someone (or an organization) trying to manipulate the masses.

    [DC: I can understand your anger at the manipulation, but the comparison overreaches by far. The behaviour of Rupert Murdoch, the Republican Party, Big Oil and other assorted villains is reprehensible enough on its own. It does not require strained analogies to other eras. ]

  54. Regarding all this funding stuff around NSWC: it is interesting and probably worth pursuing … but people really, really should learn about the DHHS Office of Research Integrity (ORI). ORI has serious teeth. They *debar* researchers, meaning that for N years, the researcher gets *no* Federal research money, not just from DHHS. They have never debarred larger entities, but they actually have that power, which is why sensible universities do not mess with them. Read SSWR A.7: Funding, pro bono or not.

  55. Three things spring to mind.
    If Wegman erased stuff from the GMU server, does this mean he worked on the report in a GMU office with GMU equipment?
    Was there any sworn testimony before Congress?
    Who paid to print the report?

    • If Wegman used GMU’s email system, then the records are public.

    • 1. “If Wegman erased stuff from the GMU server, does this mean he worked on the report in a GMU office with GMU equipment?”

      But in response to a request for materials related to the report, GMU said it “does not have access to the information.” Separately in that response, Wegman said his “email was downloaded to my notebook computer and was erased from the GMU mail server,” and he would not disclose any report communications or materials because the “work was done offsite,” aside from one meeting with Spencer.

      So it appears that GMU does not have the emails. And Wegman does, but refuses to release them. Perhaps Wegman’s FOIA obligations would turn on whether the emails were sent or received using his GMU address and/or passed through GMU servers (e.g. via forwarding from another address). His response appears to indicate that this might be the case, so there may be grounds to appeal GMU’s response and require GMU to force Wegman to release any relevant emails.

      2. “Was there any sworn testimony before Congress?”

      Yes, there was. In that testimony, Wegman appears to support Republican claims that his report was “peer-reviewed”. That claim was elaborated in a subsequent written response to Rep. Bart Stupak, although it is not clear that this testimony is considered sworn testimony for Congress.

      Also, the report itself claimed that Wegman et al had performed “independent verification” of the M&M critique.

      3. “Who paid to print the report?”

      Presumably that was done by the House E&C Committee under their budget.

  56. I’ve added the following update:

    According to USA Today’s Dan Vergano, a number of plagiarism experts have confirmed that the Wegman Report was “partly based on material copied from textbooks, Wikipedia and the writings of one of the scientists criticized in the report”. The allegations, first raised by myself in a series of posts starting in December 2009 (notably here and here), are now being formally addressed by statistician Edward Wegman’s employer, George Mason University.

    I’ll have a more substantive post on this very important (and welcome) coverage in the mainstream media soon.

    • I’ll have a more substantive post on this very important (and welcome) coverage in the mainstream media soon.

      Ponder the fact that USA Today has scooped the NY Times, Washington Post, and LA Times on this story by actually going out and getting independent experts on university policy matters to take a look and evaluate the claims.

      USA Today …

    • Gavin's Pussycat

      dhogaza, I’m not “into” the US press, but I had the impression that USA Today was a tabloid. Free distribution in hotel rooms and such. Was I wrong?

    • USA Today is a tabloid, but not of the right-wing, “yellow journalism” variety. It’s generally short on analysis and depth, and long on snappy graphics. But its journalistic standards are nevertheless quite high, as far as I can tell. Certainly Dan Vergano is a top-notch science reporter, a cut above, say, Fred Pearce.

    • Yes, their journalistic standards are high, but they’re considered lightweight and somewhat superficial in regard to their coverage of events. They don’t appear to devote anything like the resources of a NY Times or WaPo to stories.

      The Times might assign a reporter to a story for months and the resulting coverage will be highly detailed and strongly supported by the research done by the journalist.

      USA Today … um, no.

      So if I were science editor of the NY Times, I’d be embarrassed that USA Today has done the gumshoe work to not only cover the Wegman story, but to research it and back it up with expert commentary.

      I mean, people might wonder if the NY Times doesn’t think that the story’s not … important … especially given the fact that the Times has given ample coverage to stories like Climategate etc. Maybe it just doesn’t quite fit the narrative …

  57. Horatio Algeranon

    Ponder the fact that USA Today has scooped the NY Times,

    …and that two bloggers (John Mashey and Deep Climate) have scooped all the worldwide “news” media combined.

    Part of the reason may be that “plagiarism experts” are not exactly the BFFs of the NY Times and other news organizations (“If the flies ain’t around, why attract them with dead meat?”)

    And look on the bright side: it wasn’t the National Enquirer.

    • …and that two bloggers (John Mashey and Deep Climate) have scooped all the worldwide “news” media combined.

      Good point. It actually fits with what I’m saying, in that picking up on the work of others and pushing it a bit further is actually a very good job by USA Today standards – investigative reporting isn’t their forte (to put it mildly).

      Yet the NY Times and WaPos of the world would like us to believe that such reporting *is* their forte … yet … they’ve really done little (if anything) to expose as false claims that there are two legitimate scholarly sides to the “climate debate”, and at times add credibility to claims that indeed perhaps the climate science side is the side playing games (climategate coverage, etc) rather than the McI’s, Wegman’s, and RP [J/S]rs of the world.

  58. Vergano is very, very good, knowledgeable, well respected by top scientists, digs really hard, calls people and checks, and if he goofs, he fixes it … But of course, he is a Penn State grad, like I am…

    FWIW, USA Today’s circulation is just below the WSJ’s (not an easily gettable venue) and more than twice that of the third-ranked paper, the NY Times.

  59. Update: Wegman responds:

    ” ‘I will say that there is a lot of speculation and conspiracy theory in John Mashey’s analysis which is simply not true,’ Wegman said. “

    • How much is “a lot”?

      “He said the committee “wanted our opinion as to the correctness of the mathematics” used in two climate studies.”

      Right, but what’s the SNA stuff doing in WR then?

      “They wanted the truth as we saw it,” Wegman said.”

      So when his view is colored, the truth as he sees it will be too, which is nicely displayed in DC’s and John Mashey’s work…

  60. I’m thinking the next post should be a roundup of both the plagiarism issues and the factual issues. The plagiarism has gotten started first, and the response of your average denialist will be “so what, that just means they couldn’t get him by proving him wrong”, which feeling will have some weight with a large section of the populace. How can the media be persuaded to spend some time thinking about the factual issues in the report?

    • Good points Guthrie. As we know, thanks to the outstanding work of DC and Mashey, the problems with WR go way beyond the plagiarism, and extend down to the very core of their work.

      Someone should really be following up with Vergano et al. on the factual issues.

      It was a stroke of genius to solicit the ruling of three independent experts on the plagiarism. Kudos to the person/s who came up with that. As a result GMU are now in a very tight corner.

  61. Remember folks — this is the same Wegman who said at the Congressional hearings,

    “Carbon dioxide is heavier than air. I don’t know where it sits with respect to the atmospheric profile.”

    Now what was all that fuss about regarding climate-scientists failing to consult statisticians for their expertise???

  62. Pingback: A test for establishment climate journalists | The Way Things Break

  63. Wegman has responded. This is after he had stated:

    “I’m very well aware of the report, but I have been asked by the university not to comment until all the issues have been settled,” Wegman says, by phone. “Some litigation is underway.”

    DC, any indication that anything’s been settled?

    The wording in the response is interesting.

    “We are not the bad guys. … We have never intended that our Congressional testimony was intended to take intellectual credit” for other scholars’ work.”

    I’m not sure that was his intention either. It was worse than that. But here he isn’t flat-out denying that he lifted text without citation, or distorted the meaning. He’s just implying that if he did, he didn’t mean to.

    “Wegman said he and his report co-authors felt “some pressure” from a House committee to complete the report “faster than we might like.” But he denied that there was any attempt to tilt the influential climate report politically.”

    This is sort of consistent with the “we were sloppy” defense. The second claim is laughable.

    I also think that if he uses the “no bad intention” defense, the stuff involving distorting the meaning of text copied should be brought out.

    http://www.usatoday.com/weather/climate/globalwarming/2010-11-22-plagiarism_N.htm

  64. “Wegman said he and his report co-authors felt “some pressure” from a House committee to complete the report “faster than we might like.” But he denied that there was any attempt to tilt the influential climate report politically.”

    What the hell would that pressure amount to other than being an attempt to push it in a politically favorable direction?

    Hell, that’s stupid, sorry … what did Wegman think he was hired for in the first place other than to write a politically-tilted report?

    Well … what he agreed to do, and what he agrees he did, are different things, obviously :)

  65. re: distorting the meaning
    I’ve got a short piece in progress on falsification and fabrication, but it’s not quite ready. I’d be especially happy to get feedback from academic experts on the draft. For those into color (or colour), it adds red and even a little green to the existing cyan/yellow scheme. Lobby Microsoft for better choices of highlight colors, or even better, user ability to pick a palette, even if still limited to 15 choices.

    Note: in Vergano’s latest, note the quote of ORI’s John Dahlberg.
    HINT: important.

  66. “But he denied that there was any attempt to tilt the influential climate report politically.”

    Wegman uses words carefully, as you can see in his testimony where he carefully never admitted to AGW…

    See SSWR:
    p.26: right column.
    “Is it plausible that Barton and Whitfield would have gone forward with this effort unless they were absolutely sure the WR would produce the “right” answers?”

    Wegman was contacted through Jerry Coffey, and I expect neither will say a word about that unless forced to by subpoena. But see [COF2009] on p.38. Coffey has quite clear views, likes Solomon, Michaels, Singer.

    Now, I must conjecture, but given what the Wegman Report became
    a) Coffey talked to Wegman and found him willing.
    b) Spencer could say Barton wanted the truth, and Wegman could say they’d tell the truth as they saw it, and be perfectly correct … with Wegman and Coffey having agreed on the truth beforehand. This is one of the reasons we’d love to hear from the unknown 4th person…

    But one needs serious suspension of disbelief to think Barton would take a chance on getting the wrong answer, or on pushing somebody who would blow the whistle.

    • Maybe it’s because of too much listening to old-time radio detective shows, but I can’t shake the feeling that Wegman is the patsy in this. Why the rush?

      Summer 2005 -> Barton letter to Mann, et al., Boehlert requests NRC “hockey stick” review.
      June 2006 -> NRC Report released
      July 2006 -> Wegman Report released
      July 2006 -> Hearings (Subcommittee on Oversight and Investigations, House Committee on Energy and Commerce)

      Having the Wegman Report to “complement” the NRC Report was very convenient to some. A report that bolstered the NRC Report would have been of little use to these same people.

  67. Deech56:
    Not quite right, but I think your general idea is right.

    June/July 2005: Boehlert tells Barton/Whitfield the right way isn’t to harass scientists, but to ask for NRC report.
    Jul 2005: NAS President Cicerone writes to Barton offering one.
    Barton turns it down as not adequate to his needs (or words to that effect).
    Sept 1: Jerry Coffey contacts Wegman

    October/November: *Boehlert* asks for one, Cicerone calls Gerald North.
    He doesn’t recall the exact date (I asked). I haven’t been able to find out, but I conjecture that word got back to Boehlert about Wegman.

    • Thanks, John, for the details and corrections. I was kind of doing some quick searches to educate myself on the context of the Wegman Report – and why the authors seemed “rushed”.

      Boehlert was absolutely right to entrust the NAS with the task, since their job is to provide independent and authoritative scientific analyses. For me, the plagiarism is the handle, but the interesting story really is in the chain of events and the way the analysis was done. I am hoping that these stories get out into the open as well.

      Time to go back and read Mashey, 2010 in a little more detail, methinks.

  68. Even though Spencer apparently first met with Wegman and Said in September 2005, nothing much happened until much later.

    Another key date is March 2006, when Boehlert’s House Science Committee held hearings related to the upcoming NRC Report. At that point Barton and Spencer probably realized that their head start had suddenly vanished.

    The first Wegman et al draft was only sent out for review in June 2006. And at least one section (2.2 on noise models and PCA) appears to have been written in mid-April 2006 or later, since one of its Wikipedia antecedents did not exist before then. It’s a reasonable inference that most, if not all, the writing and cut-and-pasting (or as Wegman puts it “developing”) was done after the academic year was over.

  69. The social network analysis on Mann and his co-author network strips Wegman & Co. of any claims of innocence.

    • Ah, but Wegman is small potatoes. Wegman didn’t just arrive on the scene; he was hired for a task. As John and DC note, there are unanswered questions regarding the involvement of key people in the analysis and drafting of the report.

      The process must run its course, not to go after Wegman, but to assure scientific integrity. Looking to the future, this serves notice that the “auditors” will be held to high standards. This is especially important as the Republicans take control of key House committees. The proliferation of climate science blogs will only help in this. People like DC, Tamino and John (and hopefully the Rapid Response Team) have helped to augment the efforts of sites like RC.

  70. Here’s an interesting exchange between McI and a critical commenter, Nick Stokes, at the “Air Vent,” wherein McI responds (I use the term advisedly in McI’s case) to the Vergano articles at USAToday and DC’s and John Mashey’s analyses: McIntyre blames bristlecones and accuses others of plagiarism, et seq. McI appears to insist that the entire foundation of scientific evidence of anthropogenic climate change rests on a single sequence of bristlecone proxies. One could infer that McI would have us believe that without this set of proxies all arguments for limits on carbon emissions are misguided.

    With few exceptions, such as the part of McI’s response that discusses PCA (as contrasted with the part of his commentary that repeats plagiarism charges against Wahl and Ammann), the focus of discussion among the skeptics at Climate Audit, the Air Vent, and elsewhere is almost entirely on the plagiarism allegations against Wegman, rather than the substantive deficiencies in the statistical analysis in Wegman’s report. Mr. Vergano’s and USAToday’s critical attention to Wegman’s report and GMU’s investigation is welcome, but the focus on plagiarism continues to overshadow the main deficiencies in Wegman’s analysis, which was neither independent nor unbiased, nor even done properly, as DC and John Mashey have shown. It’s a shambles throughout, but they prefer to focus only on the plagiarism because they think it gives them the most wiggle room to claim politics are to blame.

    Also interesting is how readily Prof. Wegman <a href="http://content.usatoday.com/communities/sciencefair/post/2010/11/wegman-report-round-up/1">tosses Yasmin Said under the bus</a> while denying any improper influence or pressure from congressman Barton’s staff. That takes chutzpah.

    • Interesting. McI says:

      “On any empirical point, Wahl and Ammann got identical results to ours. They confirmed our results, but spun them differently.”

      But all Wahl and Ammann said was that the PC centering mistake didn’t really make any difference to the result. McI has just admitted his whole ‘hockey stick’ storm is just a storm in a teacup.

      I will wait for his apology at CA.

    • snide “I will wait for his apology.”

      And in the meantime we can collaborate on a rewrite of “Waiting for Godot” I suppose.

  71. Yes, this is why inquiring minds want to know when Scott, Rigsby, Reeves, and Sharabati got involved. (Sharabati because of the response to Stupak, which I don’t think was generated in a week or two.) I speculate that they were working on sections 4 and 5 earlier, but had to throw a lot of other stuff in to bulk it up. I’ve noted how disconnected the summaries and bibliography were from those sections and their key messages. So, it’s hard to tell whether the former were done earlier and ignored, or just thrown together later as part of the facade.

    I’d especially love to know when Scott was asked to write Appendix A.

  72. Pingback: Wegman responds to USA Today « Wott's Up With That?

  73. I just engaged in the completely pointless exercise of, over on WUWT, trying to point out that Wegman stated in figure 4.4 that M&M’s red noise was generated using AR1(.2), when in fact it can be conclusively proven (as above by DC in the last diagramme and the accompanying text), that it could not possibly have been AR1(.2).

    Lesson learned by this futile exercise: a denier will not ever, ever, concede even *one single point*, no matter how self-evident that point may be. Because if they do that, they open the door for the whole house of cards they have built up to come crashing down on them.

    Shorter lesson: I will never, ever post on WUWT again. They will claim a brief victory, and maintain that I ran away with my tail between my legs. But Wegman will (eventually) be discredited. Let’s just hope that it’s the non-existent due diligence on the statistics that is his downfall. Although, we know the deniers will blame it on the plagiarism and claim that the statistics were valid. And so it goes, ad infinitum…

    • Thanks for trying Steve. I had a similar experience over at Judith Curry’s blog a while ago. Only thing I had to show for it in the end was harassment by one of her fans and a not so veiled threat. Nice huh?

      These exchanges with those in denial, though, while they may seem futile, do not go unnoticed; it will all make excellent material for a very sad book one day (soon).

    • But at least you weren’t called an anonymous coward (AFAIK; I haven’t looked) ;-).

  74. There is one graph in Wegman’s report that is intriguing, and which I haven’t seen any comment on is fig. 4.2, a histogram of hockey stick indexes, on page 30.

    http://www.uoguelph.ca/~rmckitri/research/WegmanReport.pdf

    What it seems to show is that the distribution of PC1 hockey stick indices from all of the red noise proxy runs lies either between -1 and -2 for negative-pointing hockey sticks, or between 1 and 2 for positive-pointing hockey sticks. There are no PC1s that have HSIs between -1 and +1. They claim that this histogram is derived from all 10,000 sets of red noise proxies.
    The centered PC1s don’t show this behavior.

    Although the “samples” that Wegman’s report shows seem to be derived from the 100 most positive values of HSI, based on M&M’s code that DC showed in his excellent post, it seems from the histogram that there is indeed a strong tendency for a hockey stick shape to appear in PC1 with the non-centered principal components procedure. This may be why the NAS credited Wegman and M&M’s statements with some validity.

    What is missing in all this is whether the higher-order PCs that would have been derived from the red noise proxies, and included in the full PCA, would have eliminated the hockey stick shape in the full representation of the data, using the correct number of PCs. Has anyone looked into this?

    It seems that including the correct number of PCs, for the actual data of MBH, gives the same results for the optimal PC representation, whether centered or non-centered PCA is used. This would seem to be the main problem with M&M’s criticism of MBH and Wegman’s report.

    • Yes, the number of PCs to retain is a very pertinent question, as I noted in the first part of the post. How to get at this in a “null” proxy benchmark test is less obvious.

      I would say looking at the PC1 eigenvalue and its explained variance, and the number of PCs required for a given minimal amount of cumulative explained variance (say 40%) would be very telling. I am designing a more comprehensive set of tests to get at this and other issues. Stay tuned.
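      In the meantime, for readers following along, here is a minimal Python sketch of what an M&M-style “hockey stick index” test metric looks like. This is my own illustrative reconstruction (not M&M’s archived R code), assuming the usual definition: the mean of the closing “blade” period minus the mean of the whole series, divided by the whole-series standard deviation, with a 79-year blade standing in for a 1902–1980 calibration window on a 1400–1980 series.

```python
import statistics

def hockey_stick_index(series, blade_len=79):
    # Hedged reconstruction of the M&M "hockey stick index":
    # mean of the closing "blade" segment minus the mean of the
    # whole series, divided by the whole-series standard deviation.
    whole_mean = statistics.fmean(series)
    whole_sd = statistics.pstdev(series)
    blade_mean = statistics.fmean(series[-blade_len:])
    return (blade_mean - whole_mean) / whole_sd

# A flat noisy series scores near 0; a series with an upturned
# "blade" at the end scores well above 1.
flat = [0.0, 0.1, -0.1] * 100       # 300 "years", no blade
bent = [0.0] * 300 + [2.0] * 79     # flat shaft, upward blade
```

      One caveat: by construction the HSI is normalized by the series’ own standard deviation, so it measures shape only and says nothing about amplitude.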

    • PolyisTCOandbanned

      I think McI claims that he did the AR1 and described it within the E&E article. (I don’t think this excuses how he wrote the GRL article. I just know from long experience where he will crab to next.)

      Bottom line for Mike is the off-centering does affect PC1. It’s not a good procedure. Jolliffe does not endorse it, and made a remark about what the heck are you maximizing: “not variance, but some sort of combination of variance and mean”. Mike’s truculent refusal to admit the procedure was an error (and his failure even to document it in the methods – remember, initially he refused to share the code, too) reflects badly on him.

      Bottom line for McI is that he has consistently EXAGGERATED the impact of the off-centering. He makes every choice he can to make it look bigger and does not discuss how those choices drive the extent. And he mangles what factors (other than off-centering) within the overall algorithm drive results. Basically, he had a nice catch, but he has tried to make it show that it does more than it does. And he has been evasive about disaggregation of issues, and he has consistently tried to redirect discussion of his mistakes to the short centering occurring at all, to bcp kvetching, etc.

    • TCO,
      Sorry, I think you are once again asserting a false equivalence between Mann’s and McIntyre’s behaviour, or their contributions for that matter. Would it kill you to admit just once that Mann had ample reason to distrust McIntyre and not be cooperative, instead of constantly casting aspersions on Mann’s character?

      And face it, despite whatever flaws Mann’s methodology may have had (and we now know their impact was minimal), his work was groundbreaking and practically invented large-scale multi-proxy quantitative reconstructions. If you haven’t already, go and read what Gerald North and Peter Bloomfield had to say on the subject (I know you can find those on your own).

      Compare that to the contribution of McIntyre and the statisticians like Wegman or McShane and Wyner who seem to wear their ignorance of paleoclimatology, or scientific methods for that matter, as a badge of honour. One is tempted to say their contribution is minimal, except that would be overstating the case. Enough said.

    • PolyisTCOandbanned

      [Delayed in moderation – reposted.]

      I knew this would stir you up.

      1. I’m not saying the two men are equal. I’m intrigued by the nature of the specific flaw. And I do see some of the same behavior from each as far as not admitting wrong. It takes a particular sort of person to do so … and stubbornness about admitting wrong, especially to an opponent, is a very common thing. You see it in debates all the time.

      2. I have said before that at least Mann tried something interesting and innovative and wrote it up. Seriously, I have. Please don’t make me repeat things. Just carry a NEA in your head and remember stuff, I do. Seriously, one of the things that becomes tiresome is having to repeat stuff so often. I don’t archive my comments. It really bugs me that you don’t remember me already making this point!

      3. I am FULLY CAPABLE of living in a universe where someone makes an interesting attempt (the Nature paper) AND has character failings, as Mann does (even some of his buddies make little comments about him). Seriously, I can comprehend the two things in one person. It fits with my personal experience in life, Deep.

    • So according to you, MBH98/99 was an “interesting attempt”. I’ll make sure to remember that ringing recognition of Mann’s contribution.

      Your continued refusal to address the real issues is getting tiresome. M&M and Wegman et al were part of politically motivated attacks on science and scientists. Their actual scientific contribution is negligible and hugely counter-productive.

      But I’m sure if you and your work were under prolonged, vicious attack by powerful interests, you would react much more appropriately than Mann. Give me a break.

      You know what I think? The scientists have been way too restrained in fighting back. It’s unconscionable that anyone is treating Wegman et al (or M&M for that matter) as a valid, good faith discussion of climate science.

      And that’s the last word on this topic. If you have something to say on the actual science, or if you want to defend the indefensible actions of “your side”, go right ahead. But no more rambling incoherent speculations on supposed character flaws.

  75. This is a little odd. I left the following comment on CA and something similar at WUWT. After two days it is still in moderation on CA (over the same time Steve has posted twice), while on WUWT it was deleted immediately…

    Fred says:
    Your comment is awaiting moderation.
    Nov 24, 2010 at 3:25 PM
    Doesn’t it bother you in the slightest that for Wegman’s ‘replication’ of MM05, they simply plotted a sample of the random PCs that you had archived? And then mislabelled the caption (fig 4.4) to indicate that it was drawn from an AR(1) distribution, when it in fact was drawn from the ‘full’ acf case? That they gave the highly misleading impression that they had actually done something with your calculation?

    Plotting a picture of someone else’s archived result, using the original code doesn’t sound much like auditing ….

  76. Eric, hi,

    It could be because M&M overcooked the red noise with ARFIMA, instead of using AR1(.2) like most statisticians would for this type of data. But I’m admittedly a bit out of my depth here. McIntyre is on record on CA as saying:

    The ARFIMA noise produced pretty hockey sticks but introduced a secondary complication and replications have focused on AR1 examples. To set parameters for the simulation, we calculated AR1 coefficients on the North American AD1400 tree ring network using a simple application of the arima function in R: arima.coef = arima(x,order=c(1,0,0))

    But if that’s the case, then why does McIntyre’s archived code show that he used ARFIMA (method2<-"arfima")? Did he leave that in there by 'mistake' after a trial run, or were the 100 saved-off hockey sticks actually generated with ARFIMA red noise instead of AR1 as he alludes to above? The NRC had to bump the AR1 parameter all the way up to 0.9 before they could replicate McIntyre's hockey sticks.

    Perhaps you or DC could answer a question for me: in the following code, where is the ‘parameter’ used by AR1:

    if (method2=="arima") {b<-array (rep(NA,nyear*n), dim=c(nyear,n) )
    for (k in 1:n) {b[,k]<-arima.sim(list(order = c(1,0,0), ar = Data1[k]), n = nyear)}
    }

    Looks to me to be ‘ar = Data1[k]’, in which case the parameter was calculated thus, on line 51 of the archived code:

    #CALCULATE ATTRIBUTES OF NETWORK FOR USE IN RED NOISE
    if (method2=="arima") {Data1<-apply( tree,2,arima.data)}

    So that the AR1 parameter would be different for each ‘k’. Would be interesting if it was always around 0.9, wouldn’t it? But then I could be way off the mark. I’m a very experienced programmer but this is my first exposure to R.

    But in any case, is that the same as, in McIntyre’s own words:

    arima.coef = arima(x,order=c(1,0,0))

    where I presume ‘x’ is the “coefficient”? It’s nearly impossible to figure out what McIntyre actually *did* because he seems to keep changing his tune :-\ We may never know exactly how the red noise was generated that produced those hockey sticks.

    Also, like you say, in his inexorable quest to remove any hockey stick shape whatsoever from the historical paleoclimate record, McIntyre insisted on using only PC1 – even though MBH98’s short-centred PCA methodology required that all significant principal components were captured.

    • McIntyre is referring to subsequent blog analysis. In the actual article, it was ARFIMA based on the full autocorrelation function of each proxy as I described.

      His ARIMA option derives an AR1 model for each proxy. The assumption, once again, is that the AR1 parameter for the noise, absent any climatic signal, will be the same as for the proxy as a whole. But this assumption has not been justified in any way.

    • Thanks for the affirmation, DC. The fact that we can conclusively prove that it wasn’t AR1(.2) that McIntyre used for M&M05 (as Wegman boldly asserted in his report) is, I’m afraid, too subtle a point to push on the likes of the WUWT crowd. For one thing, it just sails over most of the readers’ heads. And even if they did ‘get it’, they would deny it anyway as it conflicts with their ideology.

    • arima is used to estimate ARIMA coefficients, whereas arima.sim is used to generate simulated ARIMA series.

      arima.coef <- arima(x, order = c(1,0,0))$coef['ar1']

      will estimate the ar1 coefficient (assuming an AR(1) model). (nb: without the $coef['ar1'] part, you’ll get an entire arima object, not just the coefficient.)

      This is reasonable for null proxies – you’re making the null hypothesis that there is no signal in the data. It becomes inappropriate when you make pseudoproxies, in which case you want an estimate for just the noise.
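      For anyone without R handy, the same two steps – simulate an AR(1) “red noise” series, then estimate its persistence – can be sketched in a few lines of Python. This is only an illustrative analogue of R’s arima.sim and arima calls, not the archived code; the lag-1 autocorrelation below is a quick stand-in for a fitted ar1 coefficient, and the function names are mine.

```python
import random

def simulate_ar1(phi, n, seed=0):
    # AR(1) "red noise": x[t] = phi * x[t-1] + e[t], the rough analogue
    # of R's arima.sim(list(order = c(1,0,0), ar = phi), n = n).
    rng = random.Random(seed)
    x, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    return x

def estimate_ar1(x):
    # Lag-1 autocorrelation, a quick stand-in for the 'ar1' coefficient
    # that R's arima(x, order = c(1,0,0)) would fit.
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# Wegman's Fig 4.4 caption claimed AR1(.2); the NRC reportedly had to
# push the parameter up to about 0.9 to reproduce the hockey sticks.
weak = estimate_ar1(simulate_ar1(0.2, 5000))
strong = estimate_ar1(simulate_ar1(0.9, 5000))
```

      The recovered estimates land near the generating parameters, which makes the gap between 0.2 and 0.9 – weak versus very strong persistence – easy to see for yourself.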


  77. What is missing in all this is whether the higher-order PCs that would have been derived from the red noise proxies, and included in the full PCA, would have eliminated the hockey stick shape in the full representation of the data, using the correct number of PCs. Has anyone looked into this?

    Actually, that’s the first thing that any competent analyst using PCA as a data-reduction technique will do. You always need to determine how many PC’s to retain, and you do that by looking at the eigenvalue magnitudes. If you retain too few, your data-reduction step will be “too lossy”. M&M seemed to overlook this point.

    As far as hockey-sticks go, it all boils down to “The bigger the leading eigenvalue, the bigger the hockey stick”. The “short-centered” leading eigenvalue (EV) magnitude for Mann’s tree-ring data is much larger than the corresponding EV magnitudes produced in M&M’s “red noise” runs.

    Regarding the “Hockey Stick Index” metric touted by M&M, that metric tells you only about the “shape” of the “hockey stick” PC; it tells you nothing about the hockey-stick “size”. You need to look at the leading EV for that.

    So the bottom line is, Mann’s is bigger. Much bigger. (Hockey stick, that is.)
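    For the curious, the point that short-centering inflates the leading eigenvalue can be demonstrated with a toy calculation. This is emphatically not Mann’s stepwise PCA – just a pure-Python sketch of my own, in which “short-centering” means subtracting the mean of each null proxy’s final 40 values (a stand-in for centering on a 1902–1980-style calibration window) rather than the full-series mean.

```python
import math
import random

def red_noise(phi, n, rng):
    # AR(1) null proxy with persistence phi.
    x, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    return x

def center(col, k=None):
    # Subtract the mean of the last k values ("short-centering"),
    # or of the whole series when k is None (conventional centering).
    seg = col[-k:] if k else col
    m = sum(seg) / len(seg)
    return [v - m for v in col]

def leading_eigenvalue(cols):
    # Largest eigenvalue of the proxy-by-proxy second-moment matrix,
    # found by power iteration -- a stand-in for the first PCA step.
    p, n = len(cols), len(cols[0])
    mat = [[sum(a * b for a, b in zip(cols[i], cols[j])) / n
            for j in range(p)] for i in range(p)]
    v, lam = [1.0] * p, 0.0
    for _ in range(200):
        w = [sum(mat[i][j] * v[j] for j in range(p)) for i in range(p)]
        lam = math.sqrt(sum(x * x for x in w))
        v = [x / lam for x in w]
    return lam

rng = random.Random(42)
proxies = [red_noise(0.9, 200, rng) for _ in range(10)]
lam_short = leading_eigenvalue([center(c, 40) for c in proxies])
lam_full = leading_eigenvalue([center(c) for c in proxies])
# Short-centering leaves a residual offset in each proxy, adding a
# positive rank-one term to the matrix, so lam_short >= lam_full.
```

    In other words, relative to proper centering, short-centering can only inflate the leading eigenvalue – and the more persistent the noise, the larger the residual offsets and the bigger the inflation.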

  78. As for dealing with WUWT, for an example of the highest level of Dunning-Kruger, see the attack on Ray Bradley.

    Ray actually posts; the replies might be worth study by those who research abnormal psychology.
    Search for “bayonet”, for example.

    • Best quote: “Can you believe Mr. Bradley implied that we don’t understand the subject matter?”

    • For lovers of irony, there is this comment.

      “David Ball says:
      November 24, 2010 at 8:06 pm

      Can you believe Mr. Bradley implied that we don’t understand the subject matter?”

    • Deech and snide, you beat me to it :-\

      What really irks me about the WUWT ‘readers’ is just that. They will not read *anything* about the science. They just sit there in their own little echo chamber, insulting scientists who have worked their entire careers to document what is happening to us climate-wise, mostly because they *care*.

      The majority of posts at WUWT cause me to mutter FU under my breath after the very first sentence and just skip on to the next inane post. But what really takes the biscuit is the fact that the armchair scientists there all have their own little pet theories, and almost *all contradict each other*! Yet they never take issue with each other’s posts, but hop all over anyone that comes in with a rational viewpoint.

      So I will continue reading it occasionally for the amusement (and the horror), but have made a promise to self to never post there again. Gotta watch the blood pressure at my age…

    • This one is pretty awesome as well:

      Ray Bradley said………….”Your readers may be interested to learn that it takes many years before gas bubbles in polar ice sheets are sealed from contact with the atmosphere. ”
      ========================================================

      Pfft. No news to us mate. We probably knew that before you did. Don’t come the all knowledgeable omnipotent one with us.

      …So you willingly admit that the graph that appears behind you is wrong, misleading and meaningless….. You admitted you spliced the proxy with the instrument record without labeling that fact on the graph…… Bit silly wasn’t it?

      Ray Bradbury said…. “Thus it is quite reasonable to plot the ice core greenhouse gas data with the instrumentally recorded data. ”
      ========================================================
      I’m sure there are better ways of showing comparisons?

      “We probably knew that before you did” cracked me up, as did “Ray Bradbury”. I hadn’t been to WUWT in a while…

    • The Wattsians need to do this, because they are deep down inside well aware that you’ve exposed the Wegman report for what it is: an enormously shoddy piece of work. Their only way of soothing themselves is to attack someone else. McIntyre tried a more subtle approach (ahem): accuse Bradley of plagiarism and go after Wahl&Ammann again.

    • Coherence of argument was never a strong point at WUWT – basically throwing spaghetti against the wall to see what sticks.

      And their inability to understand the difference between actual measurements and proxies is mildly entertaining. Even I know that [CO2] in gas bubbles is a direct measurement of CO2.

  79. Man oh man… That WUWT thread contains some incredible examples of “in your face” stupidity.

    Here are some real “gems”:

    David L says:
    November 25, 2010 at 3:12 am

    Academics are so …….academic!

    I love your analogy of splicing Hawaii and Antarctica data together and splicing stock performances from two companies. I think that’s perfect! No true scientist would splice two different measurements together unless there was a very good proven correlation and/or causation between the two. For example, one could splice a temperature record obtained from mercury thermometers with one from thermocouples provided there’s accurate calibration between the two.

    stumpy says:
    November 24, 2010 at 10:38 am

    Does anyone know what co2 levels are for antarctica itself?

    I have seen themed maps of co2 levels globally from satellite, and they show antarctica as being lower than the rest of the earth. If thats the case, you cannot splice a record for antartica which has lower co2 levels, with the higher global record, you cant compare an apple with an orange – but then it makes a nice hocky stick for them to scare people with and generate funding / publicity.

    As an aside, the vostock core always fascintates me, the correlation with dust (cosmic dust?) and temperature – is it increased dust causing the earth to cool, or increased dust due to a drier cooler windier earth? We may never know!

    Ron Cram says:
    November 25, 2010 at 5:51 pm

    Ferdinand Engelbeen,

    My comment must have been poorly written as you have misunderstood me. I understand the time delay issue of Vostok ice cores. My comment did not relate to the ice cores but to CO2 measurements of the atmosphere in the Vostok region similar to the atmospheric measurements taken at Mauna Loa. I do not think you will find a direct correlation. It is certainly unscientific for Bradley (or whoever was responsible for the graphic) to splice data measuring atmospheric CO2 in Hawaii with CO2 trapped in ice cores in Russia.

    • stumpy says:
      November 24, 2010 at 10:38 am

      Does anyone know what co2 levels are for antarctica itself?

      I hope someone pointed out that there’s no atmospheric CO2 in Antarctica, because it’s all fallen as CO2 snow!

      (favorite “science” moment at WUWT *ever*!)

  80. Gavin's Pussycat

    > ice cores in Russia

    Is this the Ron Cram we have come to know and love?

  81. Yes, that CO2 trapped in ice cores in Russia cannot be compared. He is right.

  82. with CO2 trapped in ice cores in Russia. Sounds like a gulag for CO2.

  83. Sounds like a gulag for CO2.

    The gulag bubblipelago?

  84. This is slightly off-topic, but I have posted a reply to some of Wegman’s comments at USA Today.
    (Unfortunately, one cannot link directly to the specific comment.)

    There is also a long reply to some of the earlier comments at USA Today from several days ago.

    I have also asked, if Wegman and co are “not the bad guys,” maybe he would like to name those that are?

  85. Thanks, John Mashey and DC, for drawing much-needed attention to the abysmal scholarship and highly questionable sourcing of material in the Wegman Report. I think the history of this report as recounted here should be required reading for any student of journalism, as well as for the freshman class of new congress members–obviously not as an example of how to obtain biased testimony from credentialed “experts” to reach the outcome desired by an army of lobbyists and compliant staffers paid to influence representatives and the public, but rather as an example of how to recognize same when confronted by a coordinated campaign of disinformation specialists. I hope Tom DeLay’s recent conviction is an example of the justice to come for Joe Barton and his staff of anti-science propagandists, but I don’t expect much, now that the GOP controls Congress. [Note to John Mashey and DC: AT&T is watching you.]

    Still, I’d like to see the media shift their focus more toward Wegman’s misrepresentation of the supposedly independent statistical “analysis” and the fatal errors exposed therein by DC, and less emphasis be placed on the plagiarism allegations. The plagiarism and falsification accusations are well-founded, but they lend themselves to endless he-said/she-said arguments and counter-arguments of political bias. There’s enough detail in SSWR to limit the amount of wiggle room for Wegman, but he (and perhaps more importantly, the GOP ideologues in Congress) has shown himself to be an accomplished wiggler.

    It is much harder for Wegman and McI’s supporters to make a case that the technical shortcomings in Wegman’s report are attributable to the political beliefs of its critics, now that the origin of Wegman’s figures has been conclusively traced to faulty methods in McI’s code. Wegman’s statistical “analysis” is now exposed as largely a trivial exercise in repetition of M&M’s biased code, augmented by his vast ignorance and misunderstanding of the field of climate science, all dressed up as an “independent” evaluation for Congress.

  86. “Wegman’s statistical “analysis” is now exposed as largely a trivial exercise in repetition of M&M’s biased code”

    That’s what passes for replication these days.

  87. Taylor B
    In one sense, DC’s discovery of the statistics mess is the best one yet, and I do believe it will get increasing attention.

    The issue, as it’s been all along, is:
    a) Experts didn’t think much of the WR stats when it came out, and said so at the time.

    b) In academe, mistakes get made, and under no circumstances does one want to have academic misconduct claimed for honest errors. Isolated errors just aren’t actionable.

    c) On the other hand, plagiarism, especially of the blatant near-verbatim style that DC and then I found (and then “terry” with Said’s dissertation), is easy to understand for most people and actionable. (Admittedly, Wegman doesn’t seem to understand it, and the denizens of certain blogs are absolutely sure that it isn’t plagiarism, i.e., Dunning-Kruger rules.)

    d) Fabrication/falsification sometimes take more expertise to recognize, and there is always the problem of distinguishing purpose from incompetence or honest error. There is more room for argument. I recommend a nice Presentation at OSU. See especially pp.4-5.

    e) I’m trying to get more comments from experts, but my belief is:
    1) A single instance of FF may be egregious enough to be obvious.

    2) Individually, a case may be marginal, but en masse, purposeful behavior seems very likely. This is one of the reasons SSWR documented so many. I originally had a code for Bis that allowed for (strengthening MBH vs MM), but I never saw any, only the reverse.

    For instance, DC pointed out in “divergence problems” some direct contradictions to Bradley, done amidst near-verbatim plagiarized text. That seems pretty blatant.

    On the other hand, the WR inserts “confounding factors” 5 times in a page and a half. Any one of them might be OK: there are lots of confounding factors. But Bradley’s book is 600 pages, of which much enumerates the factors in great detail and describes the techniques for dealing with them. If someone unfamiliar with this reads the WR, what opinion would they form?

    f) So, one instance of plagiarism might just be a goof. For example, if the only case were the PCA text from Jolliffe’s classic, people might say “sloppy, but no big deal, it’s not a lot of text.” However, seen from the overall context of pp.13-22 of the WR, almost everything is near-verbatim cut-and-paste from somewhere, and as DC says:

    “Finally, the PCA and noise model section discussed above clearly contains the least “strikingly similar” material. But the surprise here is that there is any at all. Not only that, but changes made by Wegman et al have apparently introduced errors. Moreover, the sheer number of apparent sources and relative brevity of the antecedent passages means that additional antecedents can not be ruled out.”

    g) So, the statistics problems DC found are more than enough to sink the statistical credibility of the WR permanently, but that still might (barely) be interpreted as incompetence. But, in the context of all the other problems, it seems much harder to do that. Napoleon said “Never ascribe to malice that which can be explained by incompetence,” but in this case there seems plenty of both.

    h) In this case, the errors in statistical models are probably the most important, but it will take effort to make that clearer to a larger audience. On the other hand, the 1% cherry-pick is easy to understand and explain: if you want to prove men average 6’6″ in height, it’s easy to do so: visit an NBA basketball game and sample the men found on the court, then just don’t bother to mention where the sample came from.

    • John, I see your points and they are well taken. It’s important to recognize how the entire constellation of poor scholarship and analytical bias (if it can be described as such) is mutually reinforcing, i.e., individual errors can be corrected or explained, but the entire document and its provenance make a powerful case for intentional and actionable misconduct, not just a “whoops, my bad” response.

      Even if Wegman was spoon-fed faulty code that was intentionally constructed to produce a misleading impression of MBH’s work, he could (conceivably) claim he honestly didn’t understand what he was given or what he was being asked to do with it (ah yes–time constraints and lack of competency in the climate field). It strains credulity to the breaking point to believe he could be so easily manipulated without his knowledge, so Occam’s razor strongly favors the explanation that he knew very well what he was doing and being asked to do: mislead Congress and the public. The provenance of poor scholarship in the report builds what appears to be an irrefutable case against him. And if your and DC’s persistence in digging further goes on to the heart of this deception, my hope is there will be a case against those in Congress who instigated it.

      I wish I could be optimistic that your work would lead to formal charges against Joe Barton and his staff, but since we in the U.S. no longer have qualms about our former president admitting on national TV that he authorized the illegal torture of prisoners [to extract false confessions to support ginned-up allegations of a connection between Saddam Hussein and Al Qaeda, no less–but I digress], I don’t expect this to happen.

  88. On the other hand, the 1% cherry-pick is easy to understand and explain

    If only. According to the followers of WUWT and McIntyre, the 1% cherry pick never happened. It is a figment of ‘teh liberuls’ collective imaginations. They wish it to be so, and so it is. This is what it has come to.

  89. Of course, the other good thing to come out of this analysis is that Judith Curry is shown to be a self-appointed, 24 carat liability, as if that wasn’t already obvious.

  90. Steve:

    I long ago gave up trying to convince extreme Dunning-Kruger afflictees of anything. I have never once seen it work. Some remain absolutely sure there is no plagiarism in the WR, even after seeing:
    a) DC’s analyses
    b) My further batch
    c) Comments by 3 plagiarism experts

    Of course, some are even more sure without even seeing them…

    Since this statistics mess is really important, it needs to be summarized and condensed so as to be really understandable to a broader audience.
    The reason I pick the 1% cherry-pick (there should be a cherry-pick index to complement McI’s hockey-stick index) is that it is a really simple analogy:

    One can “prove” that men average about 6’6″. (This is easily done by sampling those found on an NBA basketball court.) Although in this case, it’s more like proving men average 350 pounds (or whatever) by sampling champion sumo wrestlers.
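    To make the selection effect concrete, here is a minimal sketch in Python (mine, not from SSWR; the height parameters are illustrative, not census figures):

```python
import random
import statistics

random.seed(42)

# Illustrative population of adult male heights in inches
# (mean ~69", sd ~3"; made-up parameters, not census data).
population = [random.gauss(69, 3) for _ in range(100_000)]

# The "NBA court" sample: keep only the tallest 1%.
top_1pct = sorted(population)[-1000:]

# An honest sample: 100 individuals drawn at random.
honest = random.sample(population, 100)

print(f"population mean: {statistics.mean(population):.1f} in")
print(f"honest sample:   {statistics.mean(honest):.1f} in")
print(f"tallest 1%:      {statistics.mean(top_1pct):.1f} in")
```

    The honest sample stays near the population mean of about 5′9″, while the tallest-1% “sample” comes out around 77 inches – roughly the 6′5″–6′6″ of the analogy – without a single dishonest measurement being taken.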

    • Gavin's Pussycat

      John, Steve, DK is part of it but not the whole story. What is also at work is folks who are deeply vested into the rejection of climatology — and for some reason, especially paleoclimatology — as a legitimate science. Backing out from that is just too painful. Some of what I’ve seen smells of quiet despair, notably the attacks on Bradley.

      Now for the population at large I don’t think this dynamic works. They understand damn well that a scientist caught lying — and plagiarism is a lie — is an ex-scientist. Just like a broken watch: it may still give the right time twice a day, but it has lost its usefulness as a timepiece.

      So, yes I think the exposure of plagiarism is useful all on its own. But of course the full story must be told.

    • GP: Certainly, DK isn’t the only reason, it’s simply one of the more obvious.
      Some of my past pieces have used this catalog of reasons for climate anti-science.

      PSY5 = Dunning-Kruger
      PSY8 = personal anchor, which is what I think you are describing, usually backed by one or more of the others.

      Like climate science itself, overly-simple analyses of motivations are inadequate as well. But DK always helps… :-)

  91. > the 1% cherry-pick

    Could someone produce a ‘deck of cards’ with all the output, displayed somehow, so people could look at a few pages of thumbnails and see the variety?

    Then maybe a little tool to select from all the outcomes either a proper random sample (different each time, of course) — and average that and display the average, or all of them overlaid, or deviations, something to illustrate what’s done for those who won’t ever understand a text description?

    If that’s doable it’d be easier then to illustrate how to pick a cherry-picked collection, average that and display it?

    Sometimes “show rather than tell” is convincing.

    Yeah, I know, it’s a ‘simple matter of programming’ and I’m not a programmer, nor a statistician. My one percent contribution is done ….

    • How about something a little less ambitious – modifying M&M script so as to save *both* the 1% sample and a true random sample. And then generate a selection of hockey sticks from both (i.e. as in Wegman Fig 4-4), and show the two “versions” of Fig 4-4 side by side.

      What you would see in the “random” selection would be approximately equal numbers of upward and downward hockey sticks, and with HSIs from 1 to 2 (median of 1.6), instead of all 1.9 or higher. Still all recognizable hockey sticks, but with some upside down and in general not as pronounced. (And yes, the orientation doesn’t matter to the subsequent regression against instrumental temperature. But it should still be shown and explained, not ignored for visual effect.)

      This is part of a more ambitious project I’m considering, which would look at the effect of a number of processing options, under different “noise” models. But that first step might happen fairly soon.
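      Pending that, the selection step itself can be sketched in a few lines of Python (a toy, not the actual R scripts: it skips the short-centered PCA entirely and uses a simplified stand-in for the HSI, so the magnitudes won’t match – only the contrast between a true random dozen and a top dozen is illustrated):

```python
import random
import statistics

random.seed(0)

def ar1(n, phi):
    """Simple AR(1) red-noise series of length n."""
    x = [random.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0, 1))
    return x

def hsi(series, blade=79):
    """Simplified hockey-stick index: how far the closing-segment mean
    sits from the whole-series mean, in standard-deviation units."""
    return ((statistics.mean(series[-blade:]) - statistics.mean(series))
            / statistics.pstdev(series))

# 1,000 null series standing in for the 10,000 simulated PC1s.
scores = [hsi(ar1(581, 0.2)) for _ in range(1000)]

# A true random dozen vs. the dozen with the strongest upward "blade".
random_12 = random.sample(scores, 12)
top_12 = sorted(scores)[-12:]

print("random 12:", sorted(round(v, 2) for v in random_12))
print("top 12:   ", [round(v, 2) for v in top_12])
```

      The random dozen will typically include both upward and downward values, while the top dozen is uniformly and strongly positive – the same contrast the side-by-side “versions” of Fig 4-4 would show.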

    • Gavin's Pussycat

      Hank, I love the ‘basketball court’ imagery. Very graphic, and — look Ma, no math!

      It should be elaborated. Surely Peter Sinclair knows what to do ;-)

  92. William Holder

    [DC: Please read the comment policy and try again. Or not. Thanks!]

  93. I went back and checked the quote Steve Metzler gave above.

    The ARFIMA noise produced pretty hockey sticks but introduced a secondary complication and replications have focused on AR1 examples. To set parameters for the simulation, we calculated AR1 coefficients on the North American AD1400 tree ring network using a simple application of the arima function in R: arima.coef = arima(x,order=c(1,0,0))

    It comes from this “Tangled Web” CA post in the aftermath of the Wegman report and Ritson’s critique (which has now been resurrected).

    Here McIntyre seems to suggest that Ritson was criticizing the so-called empirical AR1 null proxies (parameter derived directly from each real proxy series).

    But Ritson was clearly referring to the “full acf”, i.e. the ARFIMA used in M&M and mechanically reproduced by Wegman et al. Sure, the proxy-derived AR1 is in the code (it’s the “arima” option), but there is no indication that the results were ever published. Yet McIntyre seems to imply otherwise. Notice also that McIntyre refers to both noise models as red noise.

    McIntyre also seems to suggest that Ammann and Wahl were discussing this so-called “empirical AR1″ noise model in their paper. But they too were criticising the “full acf” ARFIMA null proxies.

    Not so coincidentally, McShane and Wyner also discussed the Ammann and Wahl supposed critique of “empirical AR1″. (That’s a mistake I didn’t notice on first reading, as I hadn’t looked closely at M&M 2005 or Wegman et al section 4 at that time.)

    I wonder how that happened?

    Then there’s the obscure reference to “replications”. Does this mean that McIntyre is claiming Wegman ran the arima(1, 0, 0) proxy-derived noise model?

    Tangled web indeed.
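    For what it’s worth, the quoted arima call is R; a rough stdlib-Python stand-in for that proxy-derived AR1 option might look like this (the function names are mine, and note this is the simple AR1 model – not the “full acf” ARFIMA persistence that M&M actually published and Ritson criticized):

```python
import random
import statistics

random.seed(1)

def fit_ar1(x):
    """Lag-1 autocorrelation, the Yule-Walker estimate of the AR(1)
    coefficient (a rough stand-in for R's arima(x, order=c(1,0,0)))."""
    m = statistics.mean(x)
    num = sum((x[t] - m) * (x[t - 1] - m) for t in range(1, len(x)))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def ar1_noise(n, phi):
    """Generate an AR(1) null pseudo-proxy with coefficient phi."""
    x = [random.gauss(0, 1)]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0, 1))
    return x

# Stand-in "proxy": 581 years of modest-persistence red noise.
proxy = ar1_noise(581, 0.3)
phi_hat = fit_ar1(proxy)

# Null proxy regenerated with the fitted coefficient.
null_proxy = ar1_noise(581, phi_hat)
print(f"fitted AR(1) coefficient: {phi_hat:.2f}")
```

    Noise generated this way carries only the modest lag-1 persistence of the fitted coefficient; matching the full empirical ACF instead produces the much higher long-range persistence that was at issue.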

  94. Just a note- the various submitted-for-publication critiques of McShane and Wyner 2010 are appearing as drafts at various places. When is the street date on the journal issue containing them (is it the same as the actual publication of MW), and is there any idea of how many there will be?

  95. Hi DC,

    I’m sure this happens quite often, but it’s pretty easy to know what W. Holder’s comments were: no doubt the identical ones he posted here, here, here, here (with the added creative opener of “You clearly have not spent enough time researching this issue”), here, and here, which, interestingly, are the same comments posted by “averageguy” here. In fact, typing a couple of phrases from Mr. Holder’s comment into scroogle.org turned up about a dozen identical posts in late 2009 to early 2010 in response to articles about the UEA e-mail theft, at mostly obscure and varied websites (i.e., ones where the audience isn’t likely to have much knowledge of climate science) rather than the most prominent climate websites. I wonder what else Mr. Holder does in his spare time (and how well he knows Mr. Averageguy, assuming they aren’t the same person)? I expected to find the exact same text quoted from the Heartland Institute, or some other such industry-funded source, but it didn’t seem worth the effort to keep looking.

  96. William Holder

    Hi DC. I reviewed the comment policy and don’t understand where I went wrong. If you don’t feel the comment is appropriate here, please offer me a response via email. Thank you. William.

    [DC: Your last comment was off-topic and a recycled denialist talking point, with a link to a website known to propagate misinformation. That’s three problems right there.

    Now you’ve added discussion of comment policy itself, once again off-topic and not a subject of interest. This will be the only explanation you’ll get. If you don’t like or understand the comment policy, there are plenty of other blogs for you to try. Thanks! ]

  97. John Mashey and Deepclimate should be congratulated for this work. I have always thought that some of the main stream have conceded too much by ignoring this topic which is still very much alive for the contrarians.

    By the way I wonder if anyone knows where you can get a copy of Tamino’s 5 essays on MBH etc.?

  98. harvey | November 21, 2010 at 6:53 pm and the inline comment by DC

    I think that you are both right. From the moral standpoint the significance of a lie depends on its real and potential consequences. That is why I agree with DC’s use of the term “strained analogies”. Lies intended to achieve genocide are not analogous to lies intended for other purposes.

    But Harvey is also right. His comment, which was slightly off topic, was about the wider issue of the blogosphere and the large media effort. If you ignore the goal, we are not talking about an analogy at all. The “manipulation of the masses” in the two cases uses many of the same techniques which are in these respects two examples of the same phenomenon.

    Off-topic but may be of interest, a 1997 Wegman et al paper seems to contain a slab of text (200+ words) also found in a 1995 GMU PhD thesis by Grossman. Wegman et al. don’t cite any work by Grossman (or vice-versa). Googling turns up the same text in a 2008 patent application by Al-Shameri & Wegman but no other sources.

    From Wegman et al., (1997) Statistical software, siftware and astronomy, in Statistical Challenges in Modern Astronomy II

    Both DBMS and information retrieval systems provide some functionality to maintain data. DBMS allow users to store unstructured data as binary large objects (BLOB) and information retrieval systems allow users to enter structured data in zoned fields. However, DBMS offer only a limited query language for values that occur in BLOB attributes. Similarly, information retrieval systems lack robust functionality for zoned fields. Additionally, information retrieval systems traditionally lack efficient parallel algorithms. Using a relational database approach to information retrieval allows for parallel processing since almost all commercially available parallel engines support some relational database management system. An inverted index may be modeled as a relation. This treats information retrieval as an application of a DBMS. Using this approach, it is possible to implement a variety of information retrieval functionality and achieve good run-time performance. Users can issue complex queries including both structured data and text.

    The key hypothesis is that the use of a relational DBMS to model an inverted index will: 1) Allow users to query both structured data and text via standard SQL. In this fashion, users may use any relational DBMS that supports standard SQL; 2) Allow implementation of traditional information retrieval functionality such as Boolean retrieval, proximity searches, and relevance ranking, as well as non-traditional approaches based on data fusion and machine learning techniques; 3) Take advantage of current parallel DBMS implementations so that acceptable run-time performance can be obtained by increasing the number of processors applied to the problem.

    From a 1995 GMU PhD Thesis by David Grossman:

    Both DBMS and IR systems provide some functionality to maintain data that is not intuitive to their approach. DBMS allow users to store unstructured data in Binary Large Objects (BLOB) and IR systems allow users to enter structured data in zoned fields. However, DBMS offer only a limited query language for values that occur in BLOB attributes. Similarly, IR systems lack robust functionality for zoned fields. Additionally, IR systems traditionally lack efficient parallel algorithms. An inverted index may be modeled as a relation. This treats IR as an application of a DBMS. Using this approach, it is possible to implement a variety of IR functionality and achieve good run-time performance. Users can issue complex queries including both structured data and text. A request to find articles containing vehicle and sales published in journals with over 5,000,000 subscribers requires a search of unstructured data to find the keywords vehicle and sales, and structured data to locate circulation data. Our key hypothesis is that the use of a relational DBMS to model an inverted index will: Allow users to query both structured data and text via standard SQL. In this fashion, users may use any relational DBMS that supports standard SQL. Allow implementation of traditional IR functionality such as Boolean retrieval, proximity searches, and relevance ranking. Take advantage of current parallel DBMS implementations so that acceptable run-time performance can be obtained by increasing the number of processors applied to the problem.

    • Hmmm…since you mention patent. Wasn’t there a comment from Wegman on his Facebook that a friend was not happy about one of Wegman’s patents?

      Not sure the two are related.

    • Good catch! This explains a lot, actually. Note that the circumstances are similar. The astronomy paper is also a review of sorts. It appears that Wegman has a habit of stuffing these kinds of papers with text “borrowed” from someone else. From his responses wrt. WegReport plagiarism it seems he simply does not properly understand what plagiarism is.

    • PolyisTCOandbanned

      Would be helpful to give the complete list of others (not Wegman et al.). For instance, is Said in there? Is Grossman himself in there? Not arguing one way or the other about the responsibilities of coauthors by this comment, but it would just help to have this detail…

    • Statistical Software, Siftware and Astronomy By
      Edward J. Wegman, Daniel B. Carr, R. Duane King, John J. Miller,
      Wendy L. Poston, Jeffrey L. Solka, and John Wallin

      Current version at GMU:

      http://www.galaxy.gmu.edu/papers/astr.html

      Earliest archive.org version (appears identical) in 1998:

      http://web.archive.org/web/19980121024238/http://www.galaxy.gmu.edu/papers/astr.html

      The same passage is also in this “Guide to Statistical Software”, which appears to contain the same material as the above article, but without certain introductory and other material, and without authorship information:

      http://www.galaxy.gmu.edu/papers/astr1.html

      Earliest known version on archive.org is from July 1997:

      http://web.archive.org/web/19970722090726/http://www.galaxy.gmu.edu/papers/astr1.html

    • The authors of Statistical Software, Siftware and Astronomy are Edward J. Wegman, Daniel B. Carr, R. Duane King, John J. Miller, Wendy L. Poston, Jeffrey L. Solka, and John Wallin – and it’s on the web at http://www.galaxy.gmu.edu/papers/astr.html

      But it’s a boring review which I came across while googling for something else; I scanned through it because I saw Wegman’s name and noticed this slab of text didn’t make sense in context. The only real interest is that it does suggest a pattern of behavior at GMU.

  100. William Holder

    Hi DC. It certainly relates to the broader topic of AGW. If there is a more appropriate place to ask this question within your forum – please direct me to it. The link was to a graph – not the website itself. If you can suggest a more appropriate link for the graph I’ll be happy to use that one. I was just looking for an informed opinion from someone who clearly has devoted a lot of time to this subject. It’s difficult for me not to be a skeptic when someone who purportedly is focused on the science of global warming dismisses an honest attempt to learn more. Yes, I can find many people who agree with me but that won’t make me better informed. You have, in fact, by using the term denialist and refusing to share your insight with me reinforced my opinion. Despite your extensive research and lengthy posts, you appear to be very narrow minded. It’s very clear why support for AGW has waned greatly over the last few years – and preaching to your choir has little impact on the debate – what a waste of your life. I will unsubscribe and leave you to it.

    [DC: Oh, I see. You consider anything vaguely related to AGW to be on-topic, and a link to a graph on a website not to be a link to that website. Nice try.

    Frankly, I don’t believe that this is an “honest attempt to learn more”. Not when I read comments like this concerning the Cuccinelli witch hunt:

    “If there’s nothing to hide – just present the documents and save us some money.”

    http://www.huffingtonpost.com/social/William_Holder/cuccinelli-revives-witch_b_750240_62910410.html

    Or this:
    “Sean is right. It’s guite troubling to see the University operate to hide the details of this research – it should be just the opposite.”

    http://www.cavalierdaily.com/2010/10/06/cuccinelli-orders-new-investigation/#comment-19211

    If you did want to post on topic, perhaps you could comment on Edward Wegman’s and Barton staffer Peter Spencer’s lack of transparency about the Wegman Report.

    Mann et al 2008 released all of their code and data. In marked contrast, four years after the Wegman Report, the data and computer code promised by Wegman has still not been released. What have Wegman (and Spencer) been hiding? We now know at least part of the answer. And it’s not pretty. ]

  101. Gavin’s Pussycat

    Thanks for the links

  102. Grossman wasn’t on Wegman’s list of students, as far as I can tell, and this was long before Said came along.

    But I would suggest a separate thread for this and related topics, to leave the statistics uncluttered.

    Someone somewhere in the WR discussion stated that the SNA stuff was just lying around so they threw it in. SNA was part of the patent file and was certainly lying around in 2007. The application was 2007. It may be good for the patent holders to raise awareness of SNA in government. Throwing it in would be good PR. CV padding for Wegman and Said.

    A surface skim of the patent showed a model! But TWatt has shown us that models are always wrong! Crossed wires in the Tea Room?

    My quick glance picked up a chart that, if not written by the respectable Mr. Wegman, would seem to me to be racial profiling. At the end, another chart showed how the model did predicting crime. Yes, surprise, race was linked to crime rates. Teas all around. Turns out, Wegman’s model wasn’t too effective. The real crime rates in Fairfax County, Virginia were up to 50% higher than his predictions (no entry for academic fraud, Mr. Cuccinelli).

    On a different topic, Richard North’s whine to the press truth panel was slapped down. The British, confronted by a lie from his blog, can now say it is a lie.

  104. I do not think SNA was just lying around so they threw it in.
    SSWR W.5 talks about this.

    It was the academic-facade for:
    1) real mission #2 (SSWR p.1)
    2) Meme-b (p.10), with the antecedents shown, and the uses In the Index (p.8).

    And DC’s great find [SAI2007], which disappeared during “Wegman’s bad week”, illustrates how important it was.
    See SSWR (p.94), where Said says they “were pressed hard not to testify on the social networks analysis.”

    They were strongly motivated to have the SNA.

  105. I meant to say the SNA stuff was lying around in 2006. Sometimes I tipe pour. A Watt says I can’t spill.

  106. Pete Dunkelberg

    DC: Even though Spencer apparently first met with Wegman and Said in September 2005, nothing much happened until much later.

    So was Spencer satisfied that they were his team? Or did he feel out others? How did he spend his time? Obviously you don’t know yet….

  107. Pete:
    Well (using SSWR’s notation):
    1) We know from [SAI2007] that Coffey contacted Wegman 09/01/05.
    I really doubt that Spencer would have gone forward without a pretty good idea of what to expect. If you haven’t read about Coffey, please do. His views are *very* clear, and his use as a contact is informative.

    2) The date of MM05x was mis-ascribed as 09/07/05, which hints that Wegman got it that next week as part of the first batch.

    See my long comment under USA Today that bears on this. Wegman claims that Said’s comment about a “daunting amount of material” is wrong, that Spencer only sent him a dozen or so papers. I speculate that he has chosen his words carefully, and I have explained what he could have meant. Said’s comment was very explicit, and the evidence seems strong that she did most of the bibliography, or at least, it wasn’t Wegman.

    Hmmm…John, does not the fact that one person was not overly concerned about the amount of literature needing analysis, whilst another found the prospect “daunting”, mean that a possible explanation lies in the vagaries of individuals’ subjective interpretation? Whether willingly or unwillingly [unlikely] Said appears to have been presented with a stack of literature which she found “daunting”. However, can we genuinely say that this reaction was due to the sheer amount of literature provided, or may it have been more simply because much of the subject matter was alien to her? If the former then fine, but if it’s the latter and the “daunting” refers more to the gradient of the learning curve she needed to ascend (i.e. the technical density of the material rather than its volume), then could I suggest that both Wegman’s and Said’s positions are tenable.
    Unfortunately, I haven’t read your report so I’m not sure if you have nailed this point down [sorry, too much ‘technically dense’ material on my desk at the mo!]

  109. Hasis:
    Said’s PowerPoint presentation stated that 127 papers were sent. Wegman says Said was wrong, it was 11 papers and 2 IPCC chapters.
    Nothing to do with the hockey referees finding climatology too hard to understand.

    • Not quite:
      “Reviewed some 127 technical papers related to paleoclimate reconstruction.” [Ed – So that doesn’t necessarily mean all were sent by Spencer, but clearly there was a “daunting” amount of material in the end.].

      Still, there is a discrepancy. Wegman’s count is almost certainly too low. If one were to count all of the MBH (2) and M&M papers (3), plus corrigendum (1), comments on M&M GRL 2005 (2) and replies to those comments (2), that would come out to 10 papers right there. Of course, it’s not clear that Spencer actually sent all of those, especially the bothersome comments on M&M 2005 by von Storch and Huybers. And Spencer very likely did not send Wahl and Ammann either, although he was well aware of it.

      Having said that, Said’s count seems inflated.

    • Ah, I see. Thanks John

    • Whether Peter Spencer sent them x or y papers seems rather moot. What astounds me is why a team of scientists asked to review research findings should be provided reading materials by a political staffer in the first place and why that group of scientists felt obliged to use that list as the basis for their inquiry, as if this were a high school research project with a prescribed reading list.

      Academics have full access to libraries and, um, a social network of colleagues and contacts who they can turn to for advice on what the key background papers are. And, like the rest of us, they can Google away and access the IPCC reports freely on-line.

    • Whether Peter Spencer sent them x or y papers seems rather moot. What astounds me is why a team of scientists asked to review research findings should be provided reading materials by a political staffer in the first place…

      Oh, it’s even worse, because they were asked to provide an *independent review*. Basing one’s review on reading materials provided by a political staffer you’re supposed to be independent of rather misses the point of independence …

  110. 1) Read USA Today Wegman Roundup, specifically my comment (can’t link directly).

    I outline some of the different possibilities. Note that the actual wording is important:
    *Wegman* may have been sent material. *Said* may have been sent much more material, and she seems most likely to have assembled the Bibliography.
    After all, SSWR notes the general strangeness of the Bibliography, which very well could be 80 of 127 items, of which some came from Spencer, some directly from MM+TT, and some (little) from their own research. In addition, Wegman often seemed unfamiliar with key references.

    2) But here’s the developing analogy.
    One sees a car at point A, and 1 hour later, sees it 100 miles away.
    From A to A1 (10 miles) there is only one road (Wegman’s acknowledged set from Spencer).
    From A1 to B, there are multiple routes (from Spencer or from MM+TT directly). We don’t know which route was taken, although the nature of the material makes it quite likely that much was originally suggested by McI (or McK), regardless of the route by which it got to Said.

    But of course, they shouldn’t have been getting *anything* much from Spencer, especially the PPT of an MM talk for the George Marshall Institute.

    3) But in any case, no matter which route they took, in most US states, they broke the speed limit … (not independent) UNLESS a plane picked them up and flew them to Pt B (they did all the research themselves). This is possible, but if Said thought that a daunting amount of material had been sent, the likelihood that she did the research herself is … low, especially given the other facts known of her.

  111. I’ve long presumed that among the documents sent by Spencer was the key May 11, 2005 M&M presentation in Washington DC. That was sponsored jointly by the George Marshall Institute and the Competitive Enterprise Institute. The participants in a mysterious second meeting/presentation on Capitol Hill that same day have never been revealed.

    This presentation not only summarized the M&M critique, but also laid out the case for lack of independence between paleoclimatologists, as well as lack of independence of reconstructions (overlapping proxies). Thus, this appears to be the direct inspiration for the “social network analysis” of co-authors (Fig 5.1 to 5.7) and of proxy overlaps (Fig 5.8) in Wegman et al section 5.

    On p. 18 of the presentation, we read:

    2nd Period: M&M05a&b
    – PC Algorithm unravelled
    – “Detects” hockey sticks as dominant pattern (PC#1) even in red noise [Emphasis added]

    Not “persistent red noise” or “trendless persistent red noise” – just red noise.

    • “Not “persistent red noise” or “trendless persistent red noise” – just red noise.”

      It’s those little details that can give the game away. “We are not the bad guys…” says Wegman. So, are there bad guys as far as Wegman’s concerned? I’m not the only one to have picked up on that, either.

    • Gavin's Pussycat

      Careful there… it may well be true that with short centering (and variance normalization), a hockey stick like pattern tends to become PC1, against whatever type of noise is used as a background. That would be because noise has no pattern, so any pattern that is imposed (in this case by short-centering) will tend to stick out. I don’t find that surprising.

      The interesting question to ask is, what happens when the data is not noise, but contains a real, physical hockey stick. Does its reconstruction change due to the short-centering? We know the answer to that one, don’t we.

    • GP,
      Sure – and that’s a point I made at the outset. W&A showed the minimal effect on the actual reconstruction very clearly, and Wegman et al excluded that whole discussion.

      But the demonstrated biasing effect of “short-centred” PCA applied to noise does depend on the noise model and the details of the procedure (including full normalization of the series beforehand), both of which were highly questionable in M&M and tended to exaggerate the effect. However, there is some effect on PC1, no matter what noise model is chosen.

      Still, my point is much more mundane. It’s simply this: It appears this presentation was a key document in guiding Wegman et al at the outset of their project. So from the beginning they had a misleading impression of what M&M actually did. It’s dead wrong for McIntyre to describe M&M’s noise model as “red noise”, let alone “persistent red noise”.
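      [Ed. – The biasing effect under discussion is easy to simulate. Below is a minimal sketch in Python – not M&M’s actual R code, and the AR(1) noise model and dimensions (581 years, 70 proxies, 79-year calibration window, matching the MBH98 frame) are illustrative assumptions. Pseudo-proxies are centered either on the calibration window only (“short” centering) or on the full record, and the leading PC’s “hockey stick index” (HSI: the gap between calibration-period mean and full-period mean, in standard deviations) is compared. Consistent with the point above, some bias appears even for low-autocorrelation noise, and it grows markedly with persistence – note that M&M’s actual procedure also rescaled series by calibration-period statistics, which this sketch omits.]

```python
import numpy as np

def ar1_proxies(n_years, n_proxies, phi, rng):
    """Columns are independent AR(1) series with lag-1 coefficient phi
    (burn-in omitted for brevity)."""
    x = np.zeros((n_years, n_proxies))
    x[0] = rng.standard_normal(n_proxies)
    eps = rng.standard_normal((n_years, n_proxies))
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def leading_pc(data, cal_len, short_center):
    """PC1 time series, with the mean taken over only the last cal_len
    'calibration' years (MBH98-style) or over the full record."""
    if short_center:
        centered = data - data[-cal_len:].mean(axis=0)
    else:
        centered = data - data.mean(axis=0)
    # Leading left singular vector is PC1 (up to an arbitrary sign)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0]

def hockey_stick_index(pc1, cal_len):
    """|calibration mean - full mean| of PC1, in standard deviations."""
    return abs(pc1[-cal_len:].mean() - pc1.mean()) / pc1.std()

def mean_hsi(phi, short_center, n_trials=20, n_years=581, n_proxies=70,
             cal_len=79, seed=0):
    """Average HSI of PC1 over repeated noise-only simulations."""
    rng = np.random.default_rng(seed)
    vals = [hockey_stick_index(
                leading_pc(ar1_proxies(n_years, n_proxies, phi, rng),
                           cal_len, short_center), cal_len)
            for _ in range(n_trials)]
    return float(np.mean(vals))
```

      With this setup, short centering yields a higher average HSI than conventional centering at the same (low) persistence, and raising the AR(1) coefficient inflates the short-centered HSI further – which is why the persistence of the null proxies matters so much to M&M’s headline result.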

  112. Yes, and those are only a few of the memes traceable back to that presentation. Thus the newer stuff just reinforces SSWR’s hypothesis that that document was the main inspiration/plan for the WR.

  113. Pingback: George Mason University “Climate Change Communicator of the Year” – where only one viewpoint is allowed « Wott's Up With That?

  114. Pingback: The surfacestations paper – statistics primer « Wott's Up With That?

  115. Pingback: Wegman paper retraction by Journal « Wott's Up With That?

  116. Pingback: Nielsen-Gammon interviews North and others on Wegman – plagiarism may be related to a cultural misunderstanding by foreign exchange student « Wott's Up With That?

  117. Pingback: The top 10 science scandals of 2011 according to The Scientist (1) « باحث

  118. Pingback: Wiley coverup: The great Wegman and Said “redo” to hide plagiarism and errors | Deep Climate

  119. This is a long time after the original post, but… my feeling is that at this stage in the proceedings, we should have been able to get a lot more mileage out of the fact that the Wegman Report was such an obvious stitch-up.

    OK, so you can’t get your average WUWT sycophant to even read a few paragraphs into what DC documented here, but hey, what about rational people who actually give a sh#t?

    It’s obvious that McIntyre way overcooked the red noise, and then mined the top 1% of his subsequently flawed PC analysis. So, why can’t we get people to see what actually transpired here? Why is this so difficult?

    In no way does this diminish what DC has accomplished with his analysis. It’s just frustrating to see so much being made of the plagiarism aspect of the Wegman Report, when the statistical bit of the analysis is actually the most compelling part of the rebuttal :-\

    • “It’s obvious that McIntyre way overcooked the red noise, and then mined the top 1% of his subsequently flawed PC analysis.”

      For what it’s worth, this is a topic I’m planning to return to, along with other issues in the M&M analysis and Wegman’s endorsement of same. I’ve done a fairly complete comparison of short-centred and conventional PCA, using a variety of noise models and parameters, including various “empirical” noise models. This will probably go over several posts, but I do need some more time to do the all-important scene setting post laying out the main issues as I see them.

      For example, it’s important to understand the context of PCA in the overall MBH98-MBH99 procedure (i.e. data reduction of more densely sampled proxy subsets). And it’s also important to identify the effect of other changes that M&M made to the MBH98 PCA algorithm beyond correction of the short centering. M&M did not use standardized PCA nor did they employ any PC retention criterion in their emulation of MBH98. On those grounds alone, their “corrected” reconstruction in the 2005 E&E article has no validity.

    • DC, it would be interesting to get Tamino’s feedback on what you come up with, and also he’s had Joliffe commenting over there a couple of times. Might be able to get more exposure …

    • Thanks for the reply, DC (didn’t really expect one), and good to know you’re still on the case!

    • When it comes to academic misconduct:
      1) Plagiarism may be hard to find, but once found and presented carefully (as in the side-by-side highlighted form DC and I use), almost any committee can see it quickly, even without field expertise.

      2) Falsification/fabrication are harder, at least for the sorts seen in the WR, in the sense that they may be recognized to be wrong, but the question is whether it was deliberate/reckless, etc., i.e., one needs to look at the patterns, and some efforts may take domain expertise. For example, try Strange Falsifications in the WR.

      3) Most of the statistical arguments are harder yet, as they definitely need domain expertise, and again, one needs to look at patterns to distinguish between incompetence and malice.

      Anyway, Strange Scholarship covered a vast array of problems, not just the plagiarism … but that is the easiest to explain.

  120. @METRO
    Money will turn people’s minds from water to jello