Open Thread # 5

Seems like a good time to start a new Open Thread and close the old one (a.k.a. the “TCO on Tiljander/MMH/CA/life in general” thread). Honours this time go to Gavin’s Pussycat, whose comment will start things off.


154 responses to “Open Thread # 5”

  1. Deep Climate (for Gavin's Pussycat)

    Reposted for Gavin’s Pussycat from Open Thread # 4 | August 12, 2010 at 1:23 pm

    Off topic (to the extent possible for an open thread):

    Anybody heard anything more than rumors about a thing called “Judithgate”?

    …no, not that Judith… Judith Lean. The Solar physicist.

    It started on a Chech website back in June. The accusation was that Lean was the only Solar expert on the IPCC team, and she had been referring only to her own paper with Frohlich. There was also something about faking a graph.

    The thing seems to have “gone viral”, but only on the more extreme conspiratorial sites, including of course The Motl. Bishop Hill mentions it briefly, with a question mark (interesting! Thinking of his “reputation”?). And that’s where Judithgate stands today. I learned about it by accident through a letter in a local paper.

    Reading up on what the IPCC actually wrote about solar makes the story look very improbable. Is that why it has died a quiet death? What happened to good old denialist gullibility?

    Enlightenment welcome.

  2. It’s Czech, not Chech ;) And no, I haven’t.

  3. It is the old PMOD (the “faked” graph) vs ACRIM debate repackaged as a conspiracy theory. Supposedly, ACRIM was “suppressed” to hide the fact that the Sun is driving the warming. Yaaaawn.

  4. “JudithGate”, see my comments at the blog which I believe started it off:

  5. Too bad wordpress ate the posts where tamino showed that the difference is negligible. Maybe we could do some blog science ™ and debunk the conspiracy, if people are interested.

  6. SM has provided an analogy that nicely explains the supposedly too-narrow CIs. I think we-who-are-not-statisticians have some new things to learn…

    See here.

  7. MrPete,

    I think the “stratified” groups analysis is a red herring. The spread of trend slopes for runs of models that have multiple runs is beside the point.

    Let’s compare apples with apples. MMH 2010 table 1 shows a trend with “stdev” (“std errors” according to the caption) for each model. Those s.e.’s range from 0.02 up to 0.1 and beyond. There is not one that is as low as the “models” trend s.e. which is only 0.013. To use McIntyre’s analogy it’s as if we had a spread of income in our “representative” median district that was much lower than in any of the actual districts.

    So where does this “models” trend s.e. come from? That’s what needs to be explained. Here is one possibility that I posted at Moyhu, so I might as well post it here (albeit slightly edited).

    For the 23 MMH LT model trends in table 1, I get a mean of 0.243 and stdev of 0.073. Dividing by sqrt(23) yields an s.e. for the mean estimate of 0.015.

    But if I use a weighted distribution (as described in the paper), I get a mean of 0.262 (which matches the figures), and a stdev of 0.065.

    Mean trend estimate s.e. then becomes 0.065 / (sqrt(N=23)) = 0.013. Which is exactly what is shown in MMH 2010.

    So it looks like the weighting scheme made a tight CI even tighter.

    Having said all that, I don’t know if that calculation is the correct derivation or just a coincidence.
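    That arithmetic can be checked in a few lines of Python. Only the summary figures quoted above are used (the individual per-model trends from the table are not reproduced here):

```python
from math import sqrt

# Summary figures quoted above (MMH 2010 table 1, LT model trends).
n = 23
sd_unweighted = 0.073  # stdev of the 23 model trends
sd_weighted = 0.065    # stdev under the paper's weighting scheme

se_unweighted = sd_unweighted / sqrt(n)  # ~0.0152, the 0.015 above
se_weighted = sd_weighted / sqrt(n)      # ~0.0136, the quoted 0.013 "models" s.e.

print(round(se_unweighted, 4), round(se_weighted, 4))
```

    Either way, the weighted stdev divided by sqrt(23) lands on the figure reported in MMH 2010, consistent with the derivation proposed above.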

  8. PolyisTCOandbanned

    SM is trying to redirect the conversation to a discussion of general issues with the models. But that’s not the point of contention wrt figures 2 and 3. Those whiskers are TOO NARROW. The different models are not independent samples of a population. I could take the identical model, give it different names, ship it to 500 groups, and drive the SEM to be tiny. But no one would think that my predictive ability had increased nor that “the models” could be shown inconsistent with reality based on observations outside the SEM whiskers.

    The point I made about how we look at IPCC century predictions is DEAD ON to this issue. And that’s what McIntyre clipped last night when he decided to win an argument by moderating. And now he’s running around trying to do damage control since the argument’s moved offsite. The whole thing is funny…

  9. “Mean trend estimate s.e. then becomes 0.065 / (sqrt(N=23)) = 0.013. Which is exactly what is shown in MMH 2010.”

    I suppose the independence assumption would actually imply 0.065 / sqrt(N-1 = 22) = 0.014, which is higher (by 0.001) than the published s.e. figure. Perhaps this is an artifact of the panel regression. Or perhaps the independence assumption is used to combine the individual model s.e.’s in a way that results in the very low “models” trend s.e.

  10. Be careful when applying SM’s “stratified groups” analogy to MMH. He himself says this is a simplified example to get a point across, and that the technique McK used is more sophisticated than that. But (he claims) it is the within-group vs. between-group information that is key.

    TCO, to what extent do you think your claim is true, that “the different models are not independent samples of a population?” SM’s chart grouping runs by model is pretty telling: each model produces a very characteristic signature, tightly clustered.

    At this point all I’m convinced of is this: if each model produces tightly clustered results, it is invalid to average together all the model runs to come up with some kind of loose “average model.” Just like with school districts. Or any other clustered-data scenario.

    The underlying reason for the variability is what’s interesting to me… and unfortunately I don’t come anywhere near having the stats chops to really understand it.

    Closest I can get is to understand that there are three kinds of uncertainty for any given model (data, model, params), that all three may bear on the variation between models, and that we’re being too simplistic to just try to average the run results of all model runs when imagining what the (data) CI ought to be. And now I’ll step back, occasionally watch the fireworks, and continue getting caught up on Real Life. :)

  11. PolyisTCOandbanned

    Mr. Pete:

    1. Why don’t you debate me at Climate Audit? Oh yeah….your boss is censoring me.

    [DC: Let's stick to the substantive points, as in your point #2. I don't want an endless discussion of CA's moderation policy - I think you've made that point loud and clear. Mr. Pete is here, you're here - let's have a reasonable discussion. Thanks!]

    2. I’ve already given you a trivial example of how I could duplicate models. The added models would drive the SE of the mean down, mathematically, but that is not valid since they are not independent.

    This is different from fishing red and blue balls out of a bag. There, repeated fishes would give you a better idea of the mean.

    We don’t even use the models in the sense that figure 2 and 3 shows them. Look at the IPCC predictions for the century. We don’t use the SE of the mean. We understand that there are several different tools out there and are not sure which one is right. McI is messing up in figure 2 and 3.
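    The duplication point is easy to sketch numerically. The trend values below are invented purely for illustration:

```python
from math import sqrt
from statistics import stdev

# Hypothetical trends for 5 distinct models (deg C/decade; made-up numbers).
trends = [0.15, 0.20, 0.25, 0.30, 0.35]

def sem(x):
    """Standard error of the mean, treating entries as independent."""
    return stdev(x) / sqrt(len(x))

# "Ship the same models to 100 groups under different names":
duplicated = trends * 100

print(sem(trends))      # ~0.035
print(sem(duplicated))  # ~0.0032: an order of magnitude smaller, zero new information
```

    The SEM formula happily shrinks by about sqrt(100) even though no independent information was added, which is exactly why non-independent model runs cannot be treated as a random sample.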

  12. PolyisTCOandbanned

    Mr. Pete:

    Go look in Table 1 of the paper. Look at the trend for the GISS-ER runs. The table does not give trends of individual runs, just the average and standard deviation. But under the heading LT Trend (Std Dev) you will find: 0.258 (0.065).

    Take SD and double it: 2 times .065=0.13.

    Take average and subtract and add 0.13 -> 0.128 to 0.388.

    Is that a tight spread? Compare that to what McI draws for the box plot whiskers on his graph. WTH is he doing???
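    The arithmetic, for anyone following along (figures as quoted from table 1 above):

```python
# GISS-ER LT trend (std dev) as quoted above from MMH 2010 table 1.
mean, sd = 0.258, 0.065
lo, hi = mean - 2 * sd, mean + 2 * sd
print(round(lo, 3), round(hi, 3))  # 0.128 0.388 -- a 2-sigma spread of 0.26 C/decade
```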

  13. Steve McIntyre on CA,

    Here’s another analogy taken straight from classic social science statistics on stratified samples.

    Let’s suppose that a school district has schools in many different neighborhoods and that there is considerable difference in family income between neighborhoods but little difference in family income within a neighborhood. A boxplot of family income by school would then yield a boxplot like the one that I’ve shown.

    Think now what a “median school” would look like. It would be the school with the median family income and, like all the other schools, would have relatively low within-group standard deviation.

    The “ensemble mean school” should not have properties that depart remarkably from the median school. Thus it is an ideal school that comes from a neighborhood with the overall average income and with a within-school standard deviation that is in some sense an average of the within-school standard deviations over the school district.

    Here’s what it wouldn’t be. It isn’t a school with an average income but with an income distribution that encompasses the entire city. (This is Gavin’s analysis.)

    Nor does it logically derive from the standard deviation of the inter-school average (the “SE” analysis).

    Both perspectives miss the stratified structure of the data and how it affects the analysis – hence, the wrongheadedness of the discussion by climate scientists on both sides.

    let’s take this to an extreme… suppose that internal variability is zero… then the “within group” s.d. is zero… suppose that models agree pretty well with each other and observations fall within the tight band of model projections… then by steve’s method you create the average of models and call it a model… with an s.d. of zero… show that the model falls outside the observational s.d. … proclaim that the model fails… claim that this is a test of modelling… hence extrapolate that all models fail… even though observations fall slap bang in the model range… this result is nonsensical… per tco it isn’t how models are used… where’s structural uncertainty?…

  14. Lazar, good thought experiment, but methinks you can’t have your cake and eat it too.

    a) No claim that *all* models fail
    b) If the “median model” is outside the range of observation, don’t we at least know that some number of models fail?
    c) AFAIK, they’re just using the composite the way IPCC uses it. Doesn’t their observation show that *something* has failed?

    Thus, not meaningless, nor nonsensical.

    Yes, lots of other interesting questions, such as which models succeed. And, why is there such little variance for a given model.

    Sleepy time…

  15. MrPete,

    Thanks for the response.
    If internal variability were zero and there were no observational measurement error, then the model average would certainly ‘fail’ this test. What does ‘fail’ mean? Not an exact match? “All models are wrong, but some are useful” — George Box. I suppose there’s some utility in determining ‘most models run warmer/cooler than observations’, but using the median or mean is… strange… compared to testing models individually. And it says nothing about ensemble projections, which are bracketed by structural uncertainty. Nobody expects the model average to be an exact representation of the truth; as James Annan pointed out recently, the idea that a model ensemble is centered on truth is probably empirically wrong and illogical.

  16. In this discussion and analysis of climate models, I have not seen anyone draw a possible analogy with the modeling of the path that a hurricane will follow. When I look at hurricane pages they usually provide at least a dozen paths that models predict the hurricane will follow.

  17. Gavin's Pussycat

    It seems that the elephant in the room is modelling error. What MMH seem to have done (?) is build an error budget where this element is missing from consideration, and then conclude that the error budget doesn’t close. Well, sure.

    Let’s remember — and it’s easy to have a blind spot for this — that the ‘data’ too is in reality the outcome of a sophisticated modelling process. This goes for the satellite data — RSS and UAH trends are still quite a bit different, which should tell us that there are things we don’t understand here. Radiosondes are worse still. All these data types have a history of sensor and data-reduction problems, and it would be rash to think that the current generation would be free of those. The one I trust best for historical reasons, RSS, is also the highest. Perhaps ‘the truth’ is a little higher still.

    A second point is that all these observation time series take raw data from the same air mass over the same period — they should give the same outcome. Especially important is to note that all four are affected by the same, single instance of natural variability of the real Earth, which for the last decade or so seems to be trending down, against the forced trend. All four observation time series are strongly intercorrelated due to this common natural variability input, as can be clearly seen in the paper: “Observations” doesn’t really have much smaller whiskers than any of the four constituents. They are largely (apart from the trend) telling the same story.

    Of course the GCMs may be (are!) wrong as well, and it is known that mismodellings will produce an upward sensitivity effect much more easily than a downward one — it is asymmetrical. This was found, e.g., in the experiment. But my feeling is that the focus on GCM wrongness, as if they were worthless junk, has been largely misplaced. This paper doesn’t help.

  18. PolyisTCOandbanned

    What’s really annoying is that the writers of MMH don’t clearly explain what they are doing and why, so that the reader can decide if it is valid. They just slide that Figure 2/3 in there with the too-tight whiskers. Now they defend it on the blogs and make “if-then” rhetorical statements. A truly forthcoming scientist would have defended the choices in MMH itself and been explicit, within that paper, about how they differ from previous work.

    Plus, they are still wrong. We know the models differ from model to model and don’t use them to create supertight estimates by averaging, because we know there is structural uncertainty.

  19. From the MMH conclusion:

    “In this case the 1979-2009 interval is a 31-year span during which the upward trend in surface data strongly suggests a climate-scale warming process. As noted in the studies cited in the introduction, comparing models to observations in the tropical troposphere is an important aspect of testing explanations of the origins of surface warming.”

    Translation: Yes, there is a long term trend of global surface warming (and, implicitly, relatively good agreement with the models), but discrepancies in the tropical troposphere cast doubt on human attribution and/or future trajectory of warming.

    No, that’s not in the paper, but here is McKitrick’s accompanying context, which goes a little further in that direction:

    In short, the climate models need to get the tropical troposphere right, since it’s a vast region where the models all show a relatively enhanced and rapid response to greenhouse gases. If the models and data don’t agree in that region, there might be a deep problem with the way the models represent the climatic system’s response to greenhouse gases.

    Somehow McKitrick never gets around to mentioning the more likely possibility: discrepancies are down to problems in the satellite temperature series alluded to by Gavin’s Pussycat (not to mention the USCCP report).

  20. McK’s school district analogy is missing the element of estimating a real-world value. The “ensemble mean school” is not an attempt to estimate anything, therefore it isn’t subject to the kind of uncertainty that model ensemble means are.

    Mr Pete says:
    “a) No claim that *all* models fail”

    Why even bother using an ensemble mean if the test isn’t against all models? If the claim is that the ensemble mean represents all models then the uncertainty in that ensemble mean needs to represent the uncertainty of all models, not the “average of the within-model standard deviations over all the models” as McIntyre put it.

    I mean you could define the uncertainty of an “ensemble mean” as the “average of the within-model standard deviations over all the models”, but comparing that with observed range wouldn’t tell you anything about the proportion of models whose range of uncertainty falls outside the uncertainty of observations.

    • Good points all – but one wonders how you could even “define the uncertainty of an ‘ensemble mean’ as the ‘average of the within-model standard deviations over all the models’ ” in this case.

      After all, most of the models only have one run. That’s one more reason to consider this line a (very bright) red herring.

  21. Yes I mean the US CCSP Report “Temperature Trends in the Lower Atmosphere – Understanding and Reconciling Differences” [Chapter 5 - PDF]

    Making up for my previous laziness, here is the relevant section, from p. 90:

    6. In the tropics, surface temperature changes are amplified in the free troposphere. Models and observations show similar amplification behavior for monthly and interannual temperature variations, but not for decadal temperature changes.

    • Tropospheric amplification of surface temperature anomalies is due to the release of latent heat by moist, rising air in regions experiencing convection.

    • Despite large inter-model differences in variability and forcings, the size of this amplification effect is remarkably similar in the models considered here, even across a range of timescales (from monthly to decadal).

    • On monthly and annual timescales, amplification is also a ubiquitous feature of observations, and is very similar to values obtained from models and basic theory.

    • For longer-timescale temperature changes over 1979 to 1999, only one of four observed upper-air data sets has larger tropical warming aloft than in the surface records. All model runs with surface warming over this period show amplified warming aloft.

    These results could arise due to errors common to all models; to significant non-climatic influences remaining within some or all of the observational data sets, leading to biased long-term trend estimates; or a combination of these factors. The new evidence in this Report (model-to-model consistency of amplification results, the large uncertainties in observed tropospheric temperature trends, and independent physical evidence supporting substantial tropospheric warming) favors the second explanation.

    A full resolution of this issue will require reducing the large observational uncertainties that currently exist. These uncertainties make it difficult to determine whether models still have common, fundamental errors in their representation of the vertical structure of atmospheric temperature change.

    [Emphasis added]

    • Gavin's Pussycat

      Thanks DC, this is quite clear. It really is only the tropospheric data sets with their well-known problems, and of those only the interdecadal trends, that are the odd man out. Using a comparison with this to argue that the GCMs have a problem is really the tail wagging the dog. This should be obvious.

      And about the difference between surface and tropospheric trends, which the models get right, if this were untrue, it wouldn’t just be the models that are in trouble: even the back of my envelope would be malfunctioning, as this is basic, moist-air-in-a-gravity-field, one-dimensional physics.

      BTW an interesting detail from the paper: the difference between RSS and UAH is significant at p = 0.000. So the two satellite data sets reject each other! I wonder if this is a typo ;-)

  22. Gavin's Pussycat

    DC what do you think of the Zou et al. 2009 paper?

    • My climate layman’s impression is that the paper puts the tropospheric warming trend back up where the models say it should be, is that right?

      [DC: It looks that way. But it's ocean only so the tropical LT trend can't be compared directly to model trends in Santer et al and MMH. Soon hopefully. ]

  23. Makes sense to me.


    Hmmm … I guess I wasn’t paying attention last November. ;)

    I wonder when they will release a full LT series. They seem to be getting close. The TMT trend shown is 0.131C/decade. RSS has 0.098C/decade.

    Meanwhile I can’t find Fu et al analysis anymore. (Tamino’s link is broken).

  24. Gavin's Pussycat

    Tamino is here.

  25. Comment on Ross McKitrick’s response.

    Key points…
    * ‘silly null’ test.
    * An ensemble is a single model created for making predictions. It is not “the models”.
    * A thought experiment… Ten models, nine of which match the observational record *exactly*; the tenth is some distance away. Internal variability is small enough that an MMH-type mean-trend-plus-internal-variability ensemble model fails. MMH are wrong to make inferences about “the models” and “model trends”, plural.
    * The MMH ensemble is not testing ensemble models as they are commonly constructed and used. But they are not testing individual models either. What are they testing?… something that no-one believes?
    * A prediction based on the assumption of a one hundred percent accurate and complete understanding of the physical climate system would be deemed as ludicrously overconfident.
    * James Annan on the inclusion of structural uncertainty; ‘testing modelling as an approach’ (paraphrasing)… truth being bracketed by the model spread, seems intuitively reasonable.
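    The nine-of-ten thought experiment above can be sketched in a few lines (all numbers invented for illustration):

```python
# Ten hypothetical models: nine match the observed trend exactly, one runs warm.
obs = 0.10                 # "observed" trend, invented for illustration
models = [0.10] * 9 + [0.40]
internal_sd = 0.01         # tiny internal variability

mean = sum(models) / len(models)                     # 0.13
ensemble_fails = abs(mean - obs) > 2 * internal_sd   # 0.03 > 0.02 -> True
n_match = sum(abs(m - obs) <= 2 * internal_sd for m in models)

print(ensemble_fails, n_match)  # the ensemble-mean "model" fails, yet 9 of 10 match
```

    The single outlier drags the ensemble mean outside the narrow internal-variability band, so the composite “fails” even though nine models are spot on — which is the point about inferences on “the models”, plural.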

    • Gavin's Pussycat

      “Silly null” certainly. But more than that: a strawman. The belief that GCMs are free of systematic error may live with some in the general public, but nobody serious holds it. Pushing this strawman over is political, not scientific.

      Note by the way that MMH is ambiguous on the existence of modelling error: on the one hand they estimate separate, different b coefficients for the different models, on the other, estimate their variances from the temporal variations around those individual model trend lines only. And for the “observations”, they don’t even clearly acknowledge that modelling is involved, which historically has been plagued with serious error. That’s how you see such “interesting” details like RSS and UAH rejecting each other on the p=0.000 level :-)

  26. … by similar logic, I think it is wrong to make inferences such as “the models are consistent with” from testing an ensemble model… too binary… some models will be more consistent, some will be less… observations could fall outside a 2 s.d. spread, and match one model with close to perfection.

  27. PolyisTCOandbanned

    Of course, you’re evaluating modeling as an approach. Cripes.

    Makes me sad that Ross and Steve trip themselves up so easily. I’m kinda moving from thinking them tendentious and biased and intellectually dishonest, to now questioning their basic level of thoughtfulness.

    P.s. (leave second para in, Deep, it’s relevant, I’m not impugning their manhood*, I’m commenting on the thought patterns like Willard**…might…except he knows formal philosophy and…I…um…don’t.)

    *Although there is almost a physical aggression joy in squashing them in words.

    **Actually I don’t know for sure if he would be interested. He is big on how we express things for rhetorical games or have flaws in logic. I’m not sure if he is or is not interested in how people actually think. I am…

    • Gavin's Pussycat

      TCO, James Annan seems to think you’re holding the right end of the stick on this… congrats. So do I, to the extent and in the measure that I can follow your verbal gyrations :-)

      Yes, thought processes are interesting…

  28. Gavin's Pussycat

    Lazar, true.

    Still, I think a “comment” on this paper should be based around including all these newer corrected satellite temperature data sets besides UAH and RSS, as per Tamino’s linked post. The most spectacular thing to do would be to take MMH’s software, just add these new data sets to the panel, and run it, warts and all. I would bet that while the mean of the observations shifts upward to near the ensemble mean of the models, the uncertainty of the mean of observations would be hardly reduced at all: these time series are very strongly correlated as they all contain the same forced and unforced natural variations, see their Figure. Only their long-term trends are different.

    I mean, why would MMH object to applying their method to an extended data set, right? The same argument the ‘auditors’ used against Santer ;-)
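    GP’s bet has a textbook basis: for n equally correlated estimates with pairwise correlation rho, the variance of their mean is sigma^2 (1 + (n-1) rho) / n, which tends to sigma^2 rho rather than zero as n grows. A sketch with made-up values:

```python
from math import sqrt

def se_of_mean(sigma, n, rho):
    """SE of the mean of n equally correlated estimates (pairwise correlation rho)."""
    return sigma * sqrt((1 + (n - 1) * rho) / n)

sigma, rho = 0.05, 0.9  # made-up per-dataset trend s.e. and a high common correlation
for n in (2, 4, 8, 100):
    print(n, round(se_of_mean(sigma, n, rho), 4))
# The floor is sigma*sqrt(rho) ~ 0.0474: adding data sets barely tightens the mean.
```

    With strongly correlated series — as these observational records are, sharing the same natural variability — piling on more data sets shifts the mean but hardly reduces its uncertainty, just as predicted above.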

  29. TCO was first in, and I’m second.

    I totally agree regarding the newer sets (this discussion came up after Klotzbach et al). That’s why I asked about Fu et al – the link at Tamino’s data page is broken, but presumably it’s available somewhere else.

    V+G only goes up to 2006 (I think) and apparently has some issues (they got their own caveat section in AR4, IIRC). So it would be great if the Zou team could get a complete LT product out. I’m thinking of getting in touch with them to see what the plan is.

  30. Here is the part of remarks I made at James Annan’s that may be relevant here:

    Santer et al ran a similar test of observations against the ensemble mean, corrected from the abysmal Douglass et al (Santer’s H2). But they also did a pairwise analysis (H1) that seems roughly equivalent to the ensemble spread approach.

    MMH purport to do a better, cleaner and more up-to-date H2 than Santer. Although I’m convinced some of the details have been mishandled, their findings are not surprising given that tropical tropospheric trends went down in the observations and up in the models for 1979-2009 relative to 1979-1999 (plus more d.o.f.)

    I agree with you that the MMH (and Santer H2) analysis is misplaced. But one also has to ask why MMH went backward and didn’t consider ensemble spread or pairwise analysis, even though that had been done by Santer et al (as Gavin Schmidt pointed out in comments).

    • Ah, but DC, Gavin was wrong. You know, Tom Wigley said so in an e-mail…or so McIntyre was trying to make us believe…

  31. > I’m commenting on the thought patterns like Willard**…might…except he knows formal philosophy and…I…um…don’t

    Actually, TCO, I prefer to observe “speech behaviors”, so to speak. What matters to me, first and foremost, in these debates, is how things are done with words. So the patterns I try to describe are mostly related to discourse, not [thought patterns].

    What is meant might be relevant, of course. But it’s tough to do more than hypothesize, unless, of course, you can check out your interpretation by actually asking that person. And even then, how can you be sure that a person knows what that person means?

    Also, I try not to express too harsh a judgment about character, integrity, competence, etc. My interests should speak for themselves. I never understood why I should interest myself in nonsensical or plainly outrageing people.

    Steve is Steve. He sure can make mistakes. His interests can seem biased too. And he has a strange way to pick friends. So be it.

    There is no need for me to evaluate his integrity nor his competence. At least some of what he does will be for the good, in the end. For instance, everybody knows that peer-reviewed literature sucks and that the Internet will revolutionize it.

    He’s still an interesting person to me. At the very least, I find his way of promoting his arguments truly fascinating. I am sure there is a dig there, be he right or wrong or in between, like any one of us.

    So I dig.

    [DC: Added the missing words]

  32. A sentence is missing its end:

    > So the patterns I try to describe are mostly related to discourse, not [thought patterns.]

    And outrageing is outrageous, of course.

  33. PolyisTCOandbanned

    Willard: I agree that the speech patterns are both interesting and* interesting to you.

    I agree that we can’t understand for sure or easily study people’s thought patterns. Yet I find it very interesting, one of those things that, as one gets older and sees patterns in work and education and self and others, one can start to think about.

    Also that the Internetz are a perfectly appropriate place to speculate and cogitate and pose ideas on wrt such a non-trivial question.

    I also think that Lance Armstrong is dirty and I was one who thought it a year ago. Plus I think most NFL players are juiced. And I sort of reject the “innocent until proven guilty” idea in terms of how we view the world. It’s good for a court of law…but in real life, speculating on things that are in question is fun.

    Yes…yes it is outrageous. But my hope is that it can be positive in the end. Like a drill sergeant’s harshness has a beneficial effect (at least in terms of transforming the recruit). This is not to say that I am not evil. Just that I hope there is some good coming along with the evilness that is indulged.

    *Although maybe a modern philosopher would debate the distinction!

  34. TCO:

    I also think that Lance Armstrong is dirty and I was one who thought it a year ago.

    Only a year ago? Wow.

    Maybe this explains why you were so slow to accept the fact that McIntyre is also, as one might say, a bit dirty … :)

    (and I’m just teasing, really, I am, but Armstrong was called out by Greg LeMond, the first American to win the Tour de France, many years ago.)

    I think most of us reject “innocent until proven guilty” in many areas of life. I certainly never accepted that, say, McIntyre’s innocent of the ideological bias which is nearly universal in the mining industry. As soon as I heard of him and his background, I was quite certain that he was on a witch hunt to discredit climate science by any means possible.

  35. PolyisTCOandbanned

    I was onto McI within a few months:

    Failure to respond to tough questions, equivocation. The pompous Internet Latin and amateur lawyering. Failure to quantify criticisms. Failure to report tests that showed a trial of his opponent’s work not hurting the results (that must happen sometimes!). Failure to make clear criticisms and to publish. All that stuff bothered me.

    I don’t mind wandering into a field with a strong bias, but a real scientist, a real honest thinker is open to letting the learnings he gets tug him a different way. I don’t sense any such ability of McI to learn that way. The only “aha”s he gets are those that hurt his opponent. Just by probability, by how you become more sophisticated with learning a new field would expect some ahas to go the other way for his viewpoint to evolve were he really intellectually curious/honest.

    I do think that the science itself is fascinating, and even some of his attempts to find issues. Love the grass plots of trees for instance. Shame, haven’t seen one of those in a long while. But he never seems to really drive much to completed understanding (and I’d still be fine if he did so and magnitudes were small or Mike proved right). It would still be intellectually fun and a cool way to learn. I think he’s capable of fascinating approaches as a sort of amateur detective. But it never gets finished off… even to show the approach reached a dry end. Oh well…

  36. Response to AMac regarding testing “the models” with an ensemble.

  37. “I don’t mind wandering into a field with a strong bias, but a real scientist, a real honest thinker is open to letting the learnings he gets tug him a different way. I don’t sense any such ability of McI to learn that way. The only “aha”s he gets are those that hurt his opponent. Just by probability, by how you become more sophisticated with learning a new field would expect some ahas to go the other way for his viewpoint to evolve were he really intellectually curious/honest.”

    TCO I do believe you’re mellowing in old age…

    Yes, someone with a truly open / scientific mind would be more able/willing to go where the data and evidence — all of it — takes him or her, revising their position when needed. This is why I would argue that at this point in time, the truly scientific open mind would support the consensus on AGW. That’s simply where the bulk of the evidence leads.

    True skeptics want more evidence before deciding conclusively. You can tell the true skeptics from deniers because true skeptics consider all evidence regardless of what side it supports. Deniers don’t care about the evidence — they have already decided, but did so based not on the evidence but on interest, personal, political or economic, and now it’s just a matter of finding the evidence to prop up that decision. Note I think there are some AGW supporters who are similar to deniers in that they have decided on AGW but largely on the basis of ideology or economic interest.

    A true skeptic won’t focus solely on debunking one side or the other, or play a game of ‘gotcha’ whenever some new piece of evidence comes up or whenever the graph jogs up or down in a way that temporarily supports their position, which is why I have a hard time labeling some bloggers as “skeptics”. Those people are properly deniers. I used to include only those who denied for economic reasons, but now I include those who do so for political or ideological reasons.

  38. Gavin’s Pussycat,

    Agreed that minimal selection of observational series may be a flaw…

    CCSP report (Karl 2006); 8 series
    Douglass et al. (2007); 10 series
    Santer et al. (2008); 14 series
    McKitrick et al. (2010); 4 series

    MMH use only two radiosonde series (see Fig. 6 from Santer et al.); that probably ain’t enough.

    The CCSP report;
    * HadCRUT
    * NOAA
    * HadAT2
    * RATPAC
    Satellite analyses;
    * RSS
    * UAH
    * UMd

    Douglass et al.;
    * HadCRUT
    * NOAA
    * HadAT2
    * IGRA
    * RATPAC
    Satellite analyses;
    * RSS
    * UAH
    * UMd

    Santer et al.;
    * ERSST-v2
    * ERSST-v3
    * HadISST
    * HadCRUT
    * HadAT2
    * IUK
    * RAOBCORE-v1.2
    * RAOBCORE-v1.3
    * RAOBCORE-v1.4
    * RATPAC
    * RICH
    Satellite analyses;
    * RSS
    * UAH
    * UMd

    McKitrick et al.;
    * HadAT
    * RICH
    Satellite analyses;
    * RSS
    * UAH

  39. Gavin's Pussycat

    Discussion of McShane and Wyner over at shewonk‘s place.

  40. There are a *lot* of problems with McShane and Wyner.

    First there are the obvious problems with sections 1 and 2, where M&W demonstrate an abysmal grasp of paleoclimatology in an exposition based on dimly understood and hilariously misinterpreted portions of MM and the Wegman Report.

    Section 3 is probably the low point where the authors use a toy strawman model (Lasso) to “prove” that random noise will validate within the instrumental period as well or better than the actual proxies from Mann et al 08.

    There are many issues glossed over here. Off the top of my head:
    - M&W complain about the short verification windows, and yet have shortened them from 50 years to 30!
    - It’s not clear to me yet what relationship is assumed between proxies and temperature. Is it assumed to be positive? Or whatever pops out?

    Either way it’s problematic because the real recon model assumes different a priori relationships for different proxies. You’ve got to wonder how their random “proxies” would have compared to the real proxies if they had each been run through a real validation engine that does a real screening/mini-reconstruction within the instrumental sub-window.

    On section 4 (the actual reconstruction), as many have already pointed out, the choice of k=10 PCs is absurd. There are only 90 or so proxies back to 1000! I believe this issue has been covered in the literature.

    And as RN points out at SheWonk, it is now standard to benchmark recon models/methodologies against pseudo-proxies generated from a GCM based temperature record. This allows one to judge the performance of a methodology where the “correct” answer is already known.

    I doubt M&W are actually aware of any of this, though – I don’t think they have even read Mea 2008.

    (And, yes, I’m doing a post on this later today. Stay tuned.)

    • I doubt M&W are actually aware of any of this, though – I don’t think they have even read Mea 2008.

      Section 3.3 covers “pseudo-proxies”. They manage to completely miss the point of pseudoproxy tests. Their references include Christiansen et al 2009, Lee et al 2008, Mann et al 2005/2007, and von Storch et al 2004. Did they actually read any of these papers?

  41. How come a statistics journal would publish such a paper if the flaws are this obvious to someone who knows the topic? (Or aren’t they?)

  42. Figure 17: They’re comparing their annual errors to the decadal errors in Mann et al. So it’s not surprising that their credible intervals are so much wider.

    • correction: decadal should read multi-decadal.

      From Mann et al 2008: “All series have been smoothed with a 40-year low-pass filter as in ref 33. Confidence intervals have been reduced to account for smoothing.”
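      The effect described in that quote (smoothing narrows the error bars) is easy to check numerically. Below is a minimal white-noise sketch of my own, not Mann et al’s actual error model: the standard deviation of a 40-year moving average of independent errors is far smaller than that of the raw errors.

```python
import random
import statistics

random.seed(0)

def moving_average(x, w):
    """Simple trailing moving average with window w (illustration only)."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

# Many realizations of a 200-"year" series of independent unit-variance errors
n_series, n_years, window = 500, 200, 40
raw_sd, smooth_sd = [], []
for _ in range(n_series):
    e = [random.gauss(0, 1) for _ in range(n_years)]
    raw_sd.append(statistics.pstdev(e))
    smooth_sd.append(statistics.pstdev(moving_average(e, window)))

print(statistics.mean(raw_sd))     # close to 1
print(statistics.mean(smooth_sd))  # far smaller: averaging 40 independent
                                   # errors cuts the sd by roughly 1/sqrt(40)
```

      For independent errors the reduction factor is about 1/sqrt(40) ≈ 0.16; real proxy errors are autocorrelated, so the actual reduction in Mann et al 2008 would be smaller, but the direction of the effect is the same.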

    • Yes, that does appear to be the case, although it seems MBH98 did use annual errors.

      Mann et al 2008 followed the NRC recommendation to no longer evaluate error at the annual or decadal level; hence, no discussion of the “warmest year” or “warmest decade”.

    • Gavin's Pussycat

      Hmmm, what M&W use is “path errors”. Not sure how those are to be interpreted — will depend on their beta11 and beta12.

  43. > How come a statistics journal would publish such a paper

    We’re all still looking at a draft that’s online:
    Note the typo in the title there: “… Surgace Temperature …”
    The copy editing clearly hasn’t been done yet.

    • Hank,
      It looks like a final “in press” version to me. Surely the only changes from here will be non-substantive (copy editing and formatting for print).

      Also the typo you point out is not in the article itself. Interestingly, once it gets published it will go behind the pay wall (new policy as of 2008).

    • According to

      The paper has been accepted at the Annals of Applied Statistics and a draft version is posted on the journal’s website in the forthcoming section. The posted draft was submitted for referee and editor comments and is not yet in “final” form. Likewise, some have obtained the code and data which was intended for the referees and editors as part of the review process. This code and data is not yet in final form nor is the documentation complete. The final draft of the paper and the code and data bank will be posted at the journal’s website come publication.

      [DC: I stand corrected then. But if the paper is truly accepted it should exist in more or less final form, even if we can't see it. And presumably there are no differences in the main findings that would affect the abstract. ]

  44. Upthread, I said:

    (And, yes, I’m doing a post on this later today. Stay tuned.)

    It’s taking a little longer than I would like, but the post on McShane and Wyner is on its way.

    Meanwhile, there is also discussion at Deltoid and ClimateProgress. Both emphasize that the M&W reconstruction is still very much a hockey stick.

    • Gavin's Pussycat

      Looking forward to your post… I think you were right to question the choice of ten PCs… they are letting too many noisy high PCs in. About the issue of local vs. global/hemispherical calibration, I now feel that is a red herring: due to the linearity of the whole multiregression/RegEM computation scheme it shouldn’t matter if otherwise done correctly. I think regularization is key.

    • Gavin's Pussycat

      …and about the Bayesian approach: when using that, it would be straightforward to search for the best (= narrowest uncertainties) number of PCs. No reason or excuse for using indirect methods. Same with verification in general: the posterior is the sum total of your knowledge and you can extract whatever you’re interested in.

  45. Hi folks.

    I’m a bit confused about something, and I’m sure there are people here who can help:

    TSI is often quoted as being about 1366 W/m². As I understand it, this is the total solar radiation passing through a square metre facing the sun at the top of the Earth’s atmosphere. This must be reduced somewhat if you’re measuring the same thing at the Earth’s surface, and reduced again if you’re averaging over the whole planet, over all the seasons, over day and night and so on – I’ve seen the figure of 240 W/m² quoted for this kind of measurement. What I’m confused about is whether the ~1.6 W/m² cited as the current anthropogenic forcing relates to the 1366 figure or the 240 figure or something else entirely – does it represent 0.1% of solar energy or 0.66% or some other value? Is there somewhere I can get a reasonable understanding about this without doing a 3-year course in climate science? :-)


    • Icarus:
      Since nobody who actually knows what they are talking about has yet answered your question, allow me to try. ;-)

      The second figure (240 W/sq m) is derived by dividing the first figure (1366 W/sq m) by 4 (the surface area of a sphere divided by the area of its circular shadow) and multiplying by 1 minus the Earth’s albedo (the albedo is about 0.3, so this factor is about 0.7). Therefore 240 W/sq m is what the Earth gets from the sun, averaged over the whole surface, day and night. So, yes, the anthropogenic forcing should be compared to the smaller number if you want to figure out some kind of proportional change.
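      The arithmetic above, and the proportional comparison Icarus asked about, in a few lines (standard textbook values; just an illustration):

```python
# Back-of-envelope solar budget, using standard textbook values.
TSI = 1366.0          # W/m^2 at top of atmosphere, facing the sun
albedo = 0.3          # fraction of incoming sunlight reflected to space
geometry = 1.0 / 4.0  # sphere surface area / area of its circular shadow

absorbed = TSI * geometry * (1.0 - albedo)
print(round(absorbed, 2))  # 239.05, i.e. the "240 W/m^2" figure

anthro_forcing = 1.6  # W/m^2, approximate current anthropogenic forcing
print(round(100 * anthro_forcing / absorbed, 2))  # 0.67 % of absorbed sunlight
```

      So the ~1.6 W/m² forcing is best read against the 240 figure, i.e. roughly the 0.66% Icarus guessed, not 0.1%.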

      I would recommend David Archer’s “Global Warming: Understanding the Forecast” as a very good and well-written introductory text. One negative is that the book contains a lot of minor errors, but there is a list of errata on Archer’s website, and I think a new edition is due out soon.

    • Guillaume Tell

      Alternatively, you could watch David Archer do the derivation at the chalkboard, in the video lecture series of his book, “Global Warming: Understanding the Forecast.”

    • That’s great, thanks to both of you.

  46. PolyisTCOandbanned

    I don’t know if this is too recursive, but I’ve totally dialled the “Neverending Audit” into my thinking and comments, as a term. It’s such an apt way to describe the interminable tease that McI has treated us to. For years and years and yearz…

  47. PolyisTCOandbanned

    I think the most interesting part of the paper is the failure of the proxies to do a good job predicting short intervals of instrumental temperature. You can back this up and defend it (as some Team defenders do) by saying “of course” we want to look where the rise is. But then you put yourself in the box of essentially relying on a degree or two of freedom (matching one or two long trends), vice a lot of wiggle matching. Given that climate does have a fair amount of year-to-year variation (El Niño, for example), it would be better to prove the ability to wiggle match, as it seems like you ought to be able to. This really cuts both ways… you can say McShane should not expect to be able to show this kind of validation, but then you put yourself in the box of having proxies that are not (much) better than (mildly) sophisticated noise.

    Both the Mann and McShane approaches would seem to be made worse by the use of global temp correlations vice local only (for the proxies).

    The issue of global field (vice global average) regressions is one that I don’t understand completely (just technically, I don’t), but intuitively I would think of it as an intermediate case between pure local and pure global average, hence sharing some of the same tradeoffs and issues.

    • Actually they do show that the proxies are much better at actually predicting temperatures. At least that’s what I get from the fact that they perform better than the (not-really-)”random” on the extremities, where only one endpoint is available.

      What proxies can’t do better than a bunch of random processes, is interpolating a smooth function over a hidden middle chunk – after being thoroughly selected and weighted to fit over the rest of the data. I.e. if I delete a short chunk in the middle of the function, you’ll be able to “guesstimate” the removed chunk just as well as the proxies, just by eyeball-interpolating. But if I only give you one end of the curve and ask you to guesstimate the rest, the proxies will beat you.

      In other words they have shown that proxies (when used within their own Lasso method) have poor high-frequency resolution. Did the Team ever say otherwise? Does the claim of exceptional warmth in the last decade depend on this assumption?

      The strange bit is when the apparent superiority of proxies at actual prediction is held against climate scientists. Apparently, since proxies are better at the extremities, it means that we shouldn’t use extremities for validation. Hm, OK, except for the fact that behaviour on extremities (i.e. actual prediction) is precisely what they’re interested in, and what they want to test, right?

      I also fail to understand their earlier section on the Lasso method. To me it sounds like, “we show that the Lasso method is prone to overfitting, which is clearly a problem with the proxies”. Sorry, come again?

      To me it looks a bit like a McLean et al. situation. The technical facts are apparently correct and perhaps interesting. The interpretation is, well, debatable. Of course, it’s just as likely that I completely failed to understand the paper, so I’ll wait for people who actually know their stuff to chime in.

    • Take a good look at the actual end points of this chart (from fig. 10). It shows AR(.4) null proxy in black vs. the real proxies in red.

      In the first and last window (i.e. very first and very last point in the chart), the real proxies perform no better than the “null” proxy (well within the spread of the “null” proxy).

      Think about that.

    • I suspect this has something to do with their use of the Lasso method.

      Note that with the Lasso, proxies perform significantly worse than noise that is designed to be similar to proxies (empirical AR1). It’s not due to the autocorrelation structure being more complicated than the e-AR1 series, because the results show interpolation skill improves as the noise becomes more structured.

      Perhaps the proxies score worse because of the temperature signal, not in spite of it?

      I suspect that the Lasso is picking a very small number of the best proxies, and picking a slightly larger number of noise series. This would mean that the proxy interpolations have higher variance and therefore higher RMSE — the old bias/variance trade-off.

    • I’ve run some of Dr McShane’s code, and the proxy-based reconstruction does in fact have more variance than the noise-based reconstruction. An estimator with higher variance will have higher RMSE — this explains M&W’s results.

      The reason is slightly different from my earlier guess. The proxies contain a temperature signal, so the lasso can easily overfit the proxies without too much penalty (see Figure 8). A close fit from, for example, the empirical-AR1 noise would result in a higher penalty term, so the lasso ends up smoothing out the noise-based reconstruction: voilà, lower variance.
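      The bias/variance argument in the last two comments can be made concrete with a toy example (a generic sketch, not Dr McShane’s actual code): an estimator that shrinks toward a prior value trades a little bias for a larger cut in variance, and can end up with lower RMSE than an unbiased, higher-variance competitor.

```python
import random
import statistics

random.seed(1)
mu, sigma, n, trials = 1.0, 1.0, 5, 20000  # true value, noise sd, sample size

close_fit, smoothed = [], []
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = sum(sample) / n
    close_fit.append(m)       # unbiased, variance sigma^2 / n = 0.2
    smoothed.append(0.9 * m)  # shrunk toward 0: small bias, 19% less variance

def rmse(estimates, truth):
    """Empirical root-mean-square error of a list of estimates."""
    return statistics.mean([(e - truth) ** 2 for e in estimates]) ** 0.5

print(rmse(close_fit, mu))  # near sqrt(0.2), about 0.45
print(rmse(smoothed, mu))   # near sqrt(0.1**2 + 0.81 * 0.2), about 0.41:
                            # lower despite the bias; the variance cut dominates
```

      On this reading, the lasso smooths the pure-noise “proxies” heavily (low variance) while fitting the real proxies closely (higher variance), so the real proxies can score a higher interpolation RMSE even though they carry the actual temperature signal.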

  48. PolyTCO said: ” It’s such an apt way to describe the interminable tease that McI has treated us to. For years and years and yearz…”

    And that is precisely what has given the denial machine its excuse for maybe having some substance to it for all of this time.
    All the other myriad transient froths it concocts (faithfully catalogued by Watts et al) are here today, gone tomorrow (or as soon as critical eyes are cast upon them), but McIntyre’s long tease gives the impression there’s some kind of ‘there’ there.
    Of course there isn’t, but it’s enough to hang an inactivist policy on while the world burns.

  49. PolyisTCOandbanned

    I’m working to create a meme that says thinking Republicans should ignore the guy. Pointing out that he has spent 5 years between papers, never finishes stuff, throws crap against the wall, etc. is useful here. I don’t buy the oil funding chimera and it has no traction anyhow.

    And I don’t mind if the world burns. I just want honest analysis. Honest.

    We gotta disconnect assessment from policy urging. Someone like Mike who hangs out on Daily Kos and gives a lot of Gore-style policy lectures gives me the willies to be doing basic science as well. I’d rather have a stone-face who says, I don’t care if you eff up the planet. I’m just reporting the info. I like that better. It’s part of why I think even people like Zorita and Curry are going down the wrong track with communication and outreach and policy advice and all. They should do the basic science. There are others with the skillset to translate it and advise on policy. They should stay out.

  50. I do wish people would stop saying chimera when they mean mirage.

  51. I disagree, but only somewhat PolyTCO.

    It’s the reason academics are often derided for living in ivory towers, when to an extent that is what’s required to pursue the work without the pressures and distractions of the demands of policy and other intrusions of the real world. I can agree with that.

    But is that what McI’s doing? To paraphrase that book title, the near-continuous output of quasi-respectable doubt merchandise appears to be his end product. And in providing that, he does leave the actual deployment and implementation of his merchandise to those closer to the ears of the policy makers. I’m thinking of Montford’s recent selective, Steve-coached rewrite of history here, and his ties to the GWPF.
    This isn’t all happening in a vacuum.

  52. I agree that there are two roles needed here, the scientist who is dedicated to digging out information, and the scientist who is to evangelize the ideas and concepts.
    We have this same dichotomy where I work. I am involved in overseeing the technical design and details (i.e. inward-looking), while my co-worker at my level is responsible for taking the concepts and ideas outside the company to sell them to the public and to other companies.
    Climate change needs a Carl Sagan who can take the message to the people and the government as his sole purpose, and leave the research to the experts in the field.

  53. McI’s purpose is to sow FUD. Just like on WUWT with its overwhelming plethora of articles, the purpose is not to get to the scientific truth, but to create uncertainty about the data. Just as was done with the ozone hole and tobacco. Even if beaten down with the truth, the flood of headlines is damaging.

    It’s not the first time FUD has been used, nor the last, nor is it confined to the scientific field (Microsoft’s FUD campaign vs. Linux springs to mind).

  54. As to the motive that McI has for sowing FUD, that is open to speculation. I really hate to say what any person’s motive is in what they do. They have to explain it themselves.

    However a plausible motive would be that he is an ideologue whose Libertarian views were known to his friends (squash players?) and was recruited by certain think tanks in the U.S. to help spread the FUD.

    • “would be that he is an ideologue whose Libertarian views were known to his friends ”

      In all fairness, he’s not a libertarian and said so to the Heartland audience this year. IIRC he said he actually disagrees with libertarianism. It didn’t go down well.

  55. I’ve been spending some time on “climatechangefraud”, and OMIGOD are the regulars there *DUMB*. Wading through endless repetitions of all the usual denier memes was just amazing. These guys haven’t learned any new tricks at all.

    The idiocy is strong in them, fer shure!

    • Pseudonym redacted

      [DC: Deleted. Please select a less abusive pseudonym. Also please avoid ad hominem attacks. Thanks! ]

  56. I’ve been spending some time on “climatechangefraud”…

    There’s only one thing that can save you – Deepclimate’s upcoming post on the b-school debunking of paleoclimatology!

    Hang in there, derecho!

    • Eh, it’s like the old fish-in-barrel target practice. Hardly worth the effort. Kind of a kiddie WTFWT, really.

  57. Here’s a couple of recent links from deniosaurs:

    Andrew Revkin about a new paper:

    McKitrick et al final version Aug 3 2010: “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Series”

    Have these been examined in detail anywhere?

  58. >>”How come a statistics journal would publish such a paper if the flaws are this obvious to someone who knows the topic?”

    Because the people pointing out the flaws are pretend blog scientists.

  59. That’s a good example. There’s no way anyone, real scientist or blog scientist ™, has analyzed this pre-publication version of this paper in enough depth to have any valid criticisms at this point.

    Most people commenting on this paper have no ability to understand it whatsoever, and those few that do and have commented so far are simply in a rush to affirm or condemn it to please their mouth-breathing blog readers.

    [DC: Have you read the Zorita piece? It's a pretty devastating critique from a scientist normally considered on the lukewarmer side. And I assure you he understood it very well.]

    • Skip, I echo DC here: have you read Zorita’s piece?
      He’s identified numerous errors in their narrative, and pointed out that they used methods that are not used in the paleoclimate community. Others have pointed out several other issues. For a paper that supposedly is a thorough analysis of the methodology used by paleoclimate scientists, it surely knows little about the methodology used by those paleoclimate scientists.

      That Zorita just wants to “please his mouth-breathing blog readers” is pretty funny, considering the (in my opinion outrageous) attack Zorita led on Phil Jones and others after the UEA e-mails went public. Skip Smith, the conclusion is obvious: you *want* the paper to be right, because it appears to support Steve McIntyre’s criticism.

  60. PolyisTCOandbanned

    I’m bored with the M&W paper. Could we please rip MMH a little more?

  61. >>”He’s identified numerous errors in their narrative, and pointed out that they used methods that are not used in the paleoclimate community. ”

    Some of the “errors” are actually in print in climate journals, such as using air bubbles to determine past temperature in ice cores. The point of the paper wasn’t to replicate the methods used by paleoclimatologists, but to examine their claims. Just because the method is novel (to you) doesn’t make it wrong. But who cares? That paper criticizes someone on “our side” and so must be destroyed immediately — no time to read and digest it.

    >>”Skip Smith, the conclusion is obvious: you *want* the paper to be right, because it appears to support Steve McIntyre’s criticism.”<<

    And here we get to the crux of the problem with blog science — it's about rooting for your side, not being honest. I didn't say anything complimentary about this paper, Steve McIntyre, or anyone else on the "wrong side," but because I'm critical of the laughably predictable responses from everyone involved, and I posted here, I *must* be the enemy.

    I posted a similar comment on Climate Audit as well, but they seemed a lot less defensive:

    • I’ll gladly take back my comment about you and McIntyre, but…

      Care to cite a paper that uses air bubbles in ice cores to determine the temperature? It would be a method I have not heard before. AFAIK, temperature is determined from the isotope distribution in the condensate (aka ice).

      Moreover, you are not just critical of the “laughably predictable responses”, you are critical of the responses that are critical of the paper, without indicating WHY that criticism supposedly is wrong (it being wrong is what you imply). Your only argument is “you cannot have understood it so fast”, which is handwaving.

      Let’s also reiterate, as Eduardo Zorita also notes, that M&W indicate several times that “MBH did A”, and then do some analysis based on that assumption, but where the assumption is wrong! That comes down to suggesting M&W are evaluating a published method, while in reality they do not. This is a crucial issue when they then use “novel methods”: you’ll have to explain why the novel method is superior to the old method. M&W did not do that, as they criticised methods that were not used. They thus criticised ‘strawmen’.

      Many people picked this issue up, it really doesn’t take much time when you know the field (as DC does, as Eduardo Zorita does, as TCO does, as Pete does, etc. etc. etc.). Give me a paper in my field, and within 1 hour I can point out flaws. Give me a flawed paper, and within a day I will pick up several of those flaws and indicate what they mean for the paper. A simple matter of experience in the field does that. Oh, and a skeptical mind…

    • How do climatologists use air bubbles to work out historical temperatures? Inquiring minds want to know!

  62. PolyisTCOandbanned

    I’m an honest blog science advocate. I’ve found CA can be pretty defensive as well. At least you can get in there for a while and make comments because of the lack of pre-moderation. But Steve will clamp down on discussion if the bleeding for his side gets too great.

    • One tiny problem, Skip: according to M&W it’s done using isotopes of oxygen and hydrogen…so, I’m still waiting for a reference.

  63. University of Chicago has published videos of its global warming classes on Youtube. Very accessible now.

    PHSC 13400: Global Warming

  64. Rattus Norvegicus

    Ross and the FP are up to their old “tricks” again. Most if not all of his claims have been debunked here, but you might want to review them again.

  65. Marco, as you requested I cited a paper that uses air bubbles in ice cores to determine the temperature. Rather than acknowledge that M&W were not wrong on this point, you have moved the goalposts to criticize M&W on new grounds. This is typical blog science ™, and I have no interest in this p***ing match. Feel free to believe what you like.

    • Sorry, Skip, I was doing something I have done before, and which is very useful in exposing ignorant people: to ask something where I already know the answer (I know Severinghaus’ work).

      Your reaction is interesting: as noted, M&W wrote something which contained one correct claim (temperature from isotope distribution in the air bubbles), but unfortunately followed that by the wrong claim that it’s isotopes of hydrogen and oxygen. It is interesting you get all upset when I point this out. It’s not criticism on new grounds, it’s part of the same claim they made. Their description is wrong, and shows they do not know basic elements of climate science. It’s a pattern in their paper, and Eduardo Zorita found quite a few examples. Is it blog science? If so, then we’ll have to invent a new term for the M&W paper: “not-even-blog-science”.

  66. Rattus Norvegicus

    Skip, you win this one — the article you linked to has been highly cited, and by real scientists, not just the denier crowd. The method, AFAICT, seems sound modulo the inevitable dating problems.

    However, M&W were wrong in their claims. They obviously did not study and understand the problems associated with the method, which I think is the bigger issue here.

    • Yep, Skip’s insistence that M&W were not wrong on one point is an odd defense against the claim that they’re wrong on the crucial statistical issues.

      The fact that Cromartie has trouble remembering the names of all his children doesn’t mean that he’s not able to play cornerback at an all-pro level for the Jets this year.

  67. >>”Yep, Skip’s insistence that M&W were not wrong on one point is an odd defense against the claim that they’re wrong on the crucial statistical issues.”<<

    I'm not playing the partisan blog game where we all attack or defend papers based on our pre-existing opinions.

    • Skip, care to indicate where the criticism is wrong?

      You see, you keep on claiming the criticism of M&W is (essentially solely) due to pre-existing opinions. Which makes *any* criticism of any paper by your definition “due to pre-existing opinions”. Are you not even remotely willing to accept that the paper, even on cursory reading, already contains easily identifiable errors and issues? With Eduardo Zorita you can’t even complain about pre-existing opinions, since he really doesn’t care what the reconstruction looks like…

  68. “I’m not playing the partisan blog game where we all attack or defend papers based on our pre-existing opinions.”

    But it’s still OK to attack or defend papers based on the content of those papers, right?

    In which case, the attacks on the M&W paper are correct: the paper doesn’t support the conclusions drawn from it, ignores the science completely, and reads more like a tabloid editorial than a research paper.

  69. Skip, ‘argon and nitrogen’ vs. ‘oxygen and hydrogen’ — how come?

  70. >>”Sorry, Skip, I was doing something I have done before, and which is very useful in exposing ignorant people: to ask something where I already know the answer (I know Severinghaus’ work).”<<

    Oh, of course.

  71. “Oh, of course.”

    And, of course, you didn’t.

    Didn’t even go “OK, my bad”, because one thing you can say for deniers is that they don’t “flip-flop”. When they have made a statement they STICK WITH IT. Even if it’s then proven false.

  72. Sorry for the confusion. Next time I’ll use sarcasm tags.

  73. By the way, notice what happened here? I pointed out a mistake in the criticism of M&W, so the troops rally to change the topic, call me a “denier,” and demand that I apologize.

    Typical blog science ™.

    • Sure, Zorita made a mistake. But M&W’s ice core passage was also mistaken, as John Mashey has shown in excruciating detail.

      However, the argument about ice cores is peripheral to Zorita’s main critique, which is that M&W used methodologies not actually employed in paleoclimatology. I’ve made a similar critique and have shown that M&W’s findings concerning reconstruction validation do not apply to the actual methodologies used in paleoclimatology, specifically in Mann et al 2008.

      M&W also features highly questionable scholarship – see the M&W post and latest comments for more.

  74. Tea Party seeks candidates who say no to global warming and gay marriage

    The email…

    From: Jon Morrow
    Date: Tue, 24 Aug 2010 11:14:38 -0400
    Subject: Tea Party Voter Guide and Questionaire…get your candidates on it
    2. The regulation of Carbon Dioxide in our atmosphere should be left to God and not government and I oppose all measures of Cap and Trade as well as the teaching of global warming theory in our schools.


  75. “By the way, notice what happened here? I pointed out a mistake in the criticism of M&W”

    Wait while I get out the world’s Smallest Violin.

    Oh, hang on, no, I don’t have to. Skip has skipped quite a lot of what happened:

    ‘argon and nitrogen’ vs. ‘oxygen and hydrogen’

    “I’m not playing the partisan blog game where we all attack or defend papers based on our pre-existing opinions.” (after having done just that as we can see…)

    “And here we get to the crux of the problem with blog science — it’s about rooting for your side, not being honest. ”

    “There’s no way anyone, real scientist or blog scientist ™ has analyzed this pre-publication version of this paper in enough depth to have any valid criticisms at this point”

    “Most people commenting on this paper have no ability to understand it whatsoever,”

    “Because the people pointing out the flaws are pretend blog scientists.”

    And so it tirelessly goes on.

    Skip, my violin remains unplayed. Revisionism is the soul of theology…

  76. JB, did you notice number 15?

    “15. I advocate moving our currency to a debt free supply-side labor based currency.”
    Read this out loud and the man wanted to know who was quoting Karl Marx. This looks to be lifted from “The Labour Theory of Value”, IMHO, or Das Kapital for those unfamiliar with the man’s oeuvre. (I don’t think it is really but I don’t care enough to look it up. I’d rather have fun.)

    Teabaggers are Marxists! Can I write the headline now?

    • “Teabaggers are Marxists! Can I write the headline now?”

      Sarah palin must be. She’s the one who saw to it that some of the oil wealth was redistributed to Alaskan citizens ($3,000+ pa?), isn’t she?

  77. Deep Climate, I agree the minor mistake by Zorita isn’t very important or interesting.

    However, the extreme hostility provoked by my pointing it out is fascinating.

  78. Skip Smith,
    First, I disagree that what you encountered could be characterized as “extreme hostility”, especially when compared to what happens at the “blog science” websites like WUWT and CA.

    Second, you haven’t made any substantive commentary on McShane and Wyner, or addressed the substantive criticisms put forth by myself or others, except to point out what you now admit is a minor mistake – from a “lukewarmer” scientist, at that.

    Third, you state:

    “I’m not playing the partisan blog game where we all attack or defend papers based on our pre-existing opinions.”

    Do you still think that MBH98/99 “hockey stick” graph and the “spaghetti graphs” in TAR and AR4 were “fraudulent”, despite complete lack of evidence for the assertions underlying the claim? Your comments on this issue at Scholars and Rogues, for example, sounded pretty partisan.

    Comment 29 ff:
    Skip Smith, June 10, 2010 at 11:12 pm :

    I’d like to know if Brian now understands how the “trick” was done, and if that changes his assessment of whether the hockey stick graph was fraudulent.

  79. That post at Scholars and Rogues was a response to a statement by Brian that said:

    “If the scientists had actually substituted or replaced the tree ring proxy data with instrument data, then McIntyre and Fuller would have a valid claim of fraudulent behavior by Phil Jones et al. However, nothing was substituted or replaced.”

    Several people pointed out that is in fact what happened on at least some of the “hockey stick” graphs, so I asked Brian if he was changing his position in response to this new information.

  80. Allow a lurker to add a comment or two….Skip’s complaints of “hostility,” is similar to other complaints I have seen on RC by deniers (here is where I would link to Monckton’s responses in “Monckton Makes it Up” if I could) as well as Judith Curry (not sure what description is appropriate right now for her?). They claim that there are numerous “ad hominem” (in quotes as this term seems to be the most inaccurately used term of late) attacks, insults, etc. I actually think there is a concerted effort by deniers to appear to be nice and polite (i.e. adding “please” and “thank you,” but only for window dressing), just so it can be pointed out that they are nicer. While I agree that there could be friendlier responses, I am not sure why they are so sensitive, nor what the point is of winning the niceness battle ? If only Wolfgang Pauli were available to respond to inaccurate comments, one wonders how quickly tears would be shed. Skip, make sure you reread your initial comment:
    “Because the people pointing out the flaws are pretend blog scientists.”
    And you want to point out that there has been hostility since? If you are going to pretend to be on the High Road, you should pretend from the start, it is much more believable that way.

  81. PolyisTCOandbanned

    Notice that McIntyre is starting a new sea temperature investigation. This while he locked his MMH messup (erasing his mistake, stopping discussion on it (“while in Italy”) and promising to fix his mistake…err…sometime.)

  82. Quick OT question @PolyisTCOandbanned


    Total Cost of Ownership?
    Taken Care Of??
    The Chosen One???
    Terry’s Chocolate Orange????

  83. It’s telling that Cfox labels me a “denier” based on this conversation. Anyone that doesn’t toe the party line is the enemy.

    • Gavin's Pussycat

      All it tells me is that Cfox is fluent in English. That you want to derail this ‘discussion’ away from substance into terminology, tells even more. If you don’t want to be called a denier, don’t be one.

    • > If you don’t want to be called a denier, don’t be one.

      Spot the fallacy.

  84. Skip Smith wrote:

    It’s telling that Cfox labels me a “denier” based on this conversation. Anyone that doesn’t toe the party line is the enemy.

    Skip, I find it rather telling that you didn’t quote where Cfox “labels” you a “denier”.

    Cfox has simply been watching. Reading. Not actually participating until he chose to make some observations about your style of argumentation. As he put it, he’s been a “lurker”.

    And after generally watching this exchange for a while and how you operate in particular, Cfox states:

    Skip’s complaints of “hostility,” is similar to other complaints I have seen on RC by deniers (here is where I would link to Monckton’s responses in “Monckton Makes it Up” if I could) as well as Judith Curry (not sure what description is appropriate right now for her?).

    Did he actually call you a denier? No. Did he point to evidence that you are a denier? Yes. Circumstantial evidence? Yes.

    But it isn’t conclusive. [DC: Edited - please stay civil.] So he didn’t actually conclude that you were a denier.

    But I find it rather telling that you accuse him of calling you a “denier” without actually quoting him — where to quote him would be to indicate the basis he would have for concluding that you are a denier, if he were to draw such a conclusion — and at the same time that he hadn’t actually called you one. When lawyers try to get their clients off with an insanity defense, the last thing they want is for evidence to turn up that their clients tried to conceal the crime.
    Regarding deniers, Cfox goes on to state:

    They claim that there are numerous “ad hominem” (in quotes as this term seems to be the most inaccurately used term of late) attacks, insults, etc.

    Seems to me that your response to Cfox is just such a case in point. He points out that your behavior is similar to that which is common among denialists. He points out that you react defensively. He points out that when someone disagrees with you, you get insulting and claim that they are attacking you, that they are calling you a denialist, and so forth.

    And your response?

    You state:

    It’s telling that Cfox labels me a “denier” based on this conversation. Anyone that doesn’t toe the party line is the enemy.

    Cfox even points out that when you arrived, the very first sentence you typed indicated that you had a real chip on your shoulder:

    Because the people pointing out the flaws are pretend blog scientists.

    Lets examine that a little more closely…

    Sascha had written:

    How come a statistics journal would publish such a paper if the flaws are this obvious to someone who knows the topic? (Or aren’t they?)

    You focused on the first part, “How come a statistics journal would publish such a paper if the flaws are this obvious to someone who knows the topic?” and ignored the second, “(or aren’t they?)” then you responded:

    Because the people pointing out the flaws are pretend blog scientists.

    You aren’t focusing on their arguments or premises — you are simply insulting them, claiming that they aren’t objective. And you ignore the second question by Sascha, which suggests genuine curiosity rather than a preordained conclusion. Furthermore, you are dismissing all current criticism of the paper by simply dismissing those who are making the criticisms as pretenders engaged in “blog science.”
    Then in your second comment you go on to state:

    There’s no way anyone, real scientist or blog scientist ™ has analyzed this pre-publication version of this paper in enough depth to have any valid criticisms at this point.

    So at this point, without analyzing anyone’s arguments, you have already concluded that they are invalid. Once again you sidestep any examination of the criticisms to see whether or not they are warranted, without even asking someone how they arrived at a given conclusion or what justification they have for a given premise. Instead you rely on an argument from incredulity, which is common, for example, in creationist literature, to the effect of: “I can’t believe that there is a natural explanation for the origin of the eye, therefore the origin of the eye must be supernatural.” But in this case: you can’t imagine someone arriving at a valid critique of the paper as of yet, therefore any critique at this point must be invalid.
    Then you go on to state that those who have the ability to understand the paper:

    … and have commented so far are simply in a rush to affirm or condemn it to please their mouth-breathing blog readers.

    … so rather than examine their arguments you simply dismiss the arguments as some sort of pretense at examining the paper — with the implication that those arguments aren’t worth examining. And you accuse the few who are bright enough to understand the paper of merely playing to a stupid, “mouth-breathing” audience — that is, of hypocrisy, once again without any attempt to examine their arguments.

    I can see three central principles to your methodology: argument from incredulity, ad hominem and playing the victim — by projecting on to others behavior that you are guilty of. The argument from incredulity? I don’t see it that often among climate deniers — although it is somewhat common. It is far more common among evolution deniers. But the other two — ad hominem and playing the victim by means of projection? Quite common among climate deniers.

  85. Only if “Cfox” is a party member trusted with the secret knowledge of where that line is drawn. But has Cfox ever commented here before? Got the decoder ring and the tattoo? I think not.

    Perhaps both your “party” and your “line” are fantasies?

  86. The “ions and isotopes of hydrogen and oxygen” gaffe is explained in this post in the M&W thread, followed a bit later by discussion of “Quarternary”.

    Summary: M&W learned their paleoclimate from the Wegman Report (WR), rephrased some ideas, included 3 of the WR’s errors because they didn’t know any better, and used one wording tipoff, which is the reason I found this in the first place.
    • “Artifacts” is a very odd word usage. People do not normally write of tree-ring growth patterns, coral growths, ice-core data this way.
    • “Ions and isotopes of hydrogen and oxygen” uses the WR’s meaning-changed miscopy of Bradley, as “ions” is not the same as “major ions.”
    • “Speleotherms” is an uncommon misspelling of the standard “speleothems.” The WR miscopied it; M&W fixed it, but wrongly.
    • Finally, MW repeats the WR’s misspelling of Bradley’s book as “Quarternary” in place of “Quaternary.”

    Nobody uses isotopes of hydrogen and oxygen in air bubbles; they’re in the ice.

    Then, without citing the WR for another 5 pages, they cite Bradley (1999) as a credible source, but this material did *not* come from there, and I’d guess they never looked at it. Some of the words seem grabbed out of WR tables that came from Bradley, but sadly, they picked two that had tipoff errors.

    In academe, all this is usually called plagiarism (although not of the extreme word-for-word nature like that in the WR), followed by fabrication of the reference to Bradley.

    Of course, their life won’t be helped by having 10 positive mentions of the WR, and there are some more citations that seem pretty unlikely. But let’s take the M&W discussion over to that other thread.

  87. Rattus Norvegicus

    Ah, I guess that Watts, or is that LaFramboise, has been reading DC, because a charge has been made now that the CLA of a chapter in WG2 “plagiarized” himself (see here).

    The problem is, that if you follow the links, both passages have cites to the underlying research. So, is a striking similarity to the wording you yourself have used in the past with a cite to the underlying research plagiarism? I don’t think so….

    • Hi Rattus,

      Yes, intriguing allegations. Previously I would delve a little further, but could not be bothered anymore. Too many times have I followed up on some grand conspiracy or fraud committed by the IPCC or climate scientists which was revealed by “skeptics” only to find out it was absolute nonsense.

      Anyhow, if someone wishes to go into more detail showing why the claim of plagiarism is (in all likelihood) unfounded I am all for it.

      PS: I love the terminology she uses “climate bible”, “activist” “alarmist” etc., all standard fair for those in denial about ACC/AGW. Got to keep the “skeptics” frothing at the mouths and paranoid , no matter what it takes…..

    • MapleLeaf wrote:

      PS: I love the terminology she uses “climate bible”, “activist” “alarmist” etc., all standard fair for those in denial about ACC/AGW. Got to keep the “skeptics” frothing at the mouths and paranoid , no matter what it takes….

      You are right — that is a major if not the major reason for their using that language — and I really hadn’t thought of it that way before.
      It may also serve to poison the well. And as a big lie which comes to be believed through constant repetition, it is meant to shift the views of those who are not already convinced one way or another. It may make the unconvinced at least consider the possibility that there may be considerable truth in the view.

      It may also be effective in shifting the topic of discussion — possibly getting members of the reality-based community busy defending themselves rather than explaining the science and evidence. Furthermore, if you make those who you are arguing against angry they are likely to think less clearly and argue less effectively.
      But it is no doubt particularly effective with those who already believe what the speaker is claiming. It reinforces the us vs. them view of the world. It stokes anger and resentment. It dehumanizes those that are being argued against. It fosters the illusion that those who think differently are long-haired hippies or chicken-littles whose views, arguments and concerns are not to be taken seriously but simply ridiculed.

      It helps keep “skeptics” all in lockstep. And no doubt it also acts as a membership card — proof that the speaker is a “skeptic” who can be trusted.

  88. PolyisTCOandbanned

    I posted the following at Climate Audit to protest the remaining locking of Steve’s error-bar mistake thread (and his erasing of his mistake). He is now blathering on about Oxburgh investigations and doing a bunch of hoi polloi junk, when he has a math mistake that he is preventing examination and discussion of.

    Note: my comments are not allowed free on Climate Audit. Unlike others there. Another sign that McIntyre likes to control the debate to his advantage by abuse of moderator powers.


    1. I think it’s bizarre that you spend time on this gotcha game investigation fol-de-rol, when you have major outstanding science/math issues. That said it’s your blog and life blablabla. [However, if we have the ability to make free comments, this is MY, relevant, comment.]

    2. What’s really galling is that you ERASED evidence of a mistake of yours (after “publishing” in your blog) and then locked discussion of the mistake. And said that discussion could resume only when you published new analysis.

    3. I’m not saying you have to finish that analysis. I’m not even saying you have to be a part of the discussion. But you should put that previously published mistake BACK UP for the record. And you should unlock the thread and allow discussion. Otherwise this place is just being run as a PR organ, you’re dishonest, and “publishing in the blog” means nothing since you can alter or edit anything (it’s not a real record like a journal article).

    4. People like Gerd Bürger, Eduardo Zorita, Judy Curry, etc. should have nothing to do with you. I urge them strongly to stay away from you… have fun with the hoi polloi and the Watts-like comment discussions.

    [Crossposted in AMAC and Deep Climate]

    • TCO,
      Look at the bright side. At least McIntyre admitted his mistake and even put it in bold in the post.

      Rather than rehashing McIntyre’s misbegotten example (a red herring in any case), I’d be more interested in a detailed explanation of the narrow “models” confidence intervals in MMH2010. As far as I can tell they are based on the estimate of the mean trend (the C.I. of which narrows with the number of runs), not on the spread of trends per se. Even then, it still seems tight, probably because of MMH’s peculiar model-run averaging and weighting scheme.

      There are a lot of other issues in MMH 2010. There are discrepancies between values in various tables, and between tables and figures. The stated model trends do not match linear trends calculated from the MMH archive. Some of the A1B model runs used are not fully documented in the CMIP-3 archive.

      I’m still planning a post on these issues, once the Wegman report quiets down.
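      The mean-vs-spread distinction above can be sketched in a few lines. This is a hypothetical illustration only — the trend values and run count are made up, not taken from MMH or the CMIP-3 archive:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical ensemble: 50 model runs, each yielding a temperature trend
      # scattered around a common mean (values in deg C/decade, made up).
      true_mean, run_spread, n_runs = 0.20, 0.10, 50
      trends = rng.normal(true_mean, run_spread, size=n_runs)

      # A 95% CI for the *mean* trend uses the standard error, which shrinks
      # like 1/sqrt(n) as runs are added...
      sem = trends.std(ddof=1) / np.sqrt(n_runs)
      ci_mean = (trends.mean() - 1.96 * sem, trends.mean() + 1.96 * sem)

      # ...whereas an interval covering the *spread* of individual run trends
      # uses the standard deviation itself and does not shrink with n.
      sd = trends.std(ddof=1)
      ci_spread = (trends.mean() - 1.96 * sd, trends.mean() + 1.96 * sd)

      print("CI of mean trend:     %.3f to %.3f" % ci_mean)
      print("Spread of run trends: %.3f to %.3f" % ci_spread)
      ```

      The first interval is narrower than the second by exactly a factor of sqrt(n), so conflating the two is one way a “models” confidence interval can end up looking implausibly tight.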

  89. PolyisTCOandbanned

    Yeah…there are a bunch of issues with the paper. My concern is much more the reticence to discuss them, erasing his mistakes, locking a thread, not engaging on the content, defensiveness, etc. There is a moral failing.

    It also points to the totally unsatisfactory model of “publishing” in his own blog that he can edit or erase at any time.

  90. Meanwhile, Bjorn says everyone should stop fear-mongering. In the comments, on Page 8 oldest-to-newest, Tom Harris of the ICSC is still arguing that humans aren’t causing dangerous global warming anyway.

    His bunch have started a handy list of “climate scientists” who don’t agree humans are causing dangerous global warming.

  91. Oops, here’s the link to Lomborg’s piece in the Globe & Mail yesterday:

    “If you are someone who has professionally studied the causes of climate change ….”

    What if you are a nobody who … oh, never mind.

  93. Petermann Ice Island – Now There Are Two
    “Petermann Ice Island (2010) has now broken into two parts.
    The importance of the Petermann Glacier calving to climate science is not so much that it happened, but that it was predicted to happen. Quite a few predictions were made by people working independently as individuals or groups and using different techniques for prediction.
    The incontestable fact that the calving was predicted using the scientific method – and that it happened – is a public demonstration of the power of science to predict the future. This evidence of the validity of the scientific method should be enough to convince any rational person that when climate scientists from the world’s nations agree that the world’s climate is changing, then it is changing.”

  94. The GWPF/Montford thing has been released, I see.

    Took a quick glance, and this jumped out at me on page 16.

    34. When it came to the questioning of Phil Jones, on the other hand, there was little or no effort by most committee members to question him or Edward Acton in detail, and few of the responses were subject to challenge or follow-up questions. The sole exception was Graham Stringer MP.

    Did everyone catch that? The first sentence is a solid soundbite/dogwhistle for much quoting on the blogosphere which is then followed by a complete contradiction.

    The Parliamentary inquiry did not try to quiz Jones or Acton, but Graham Stringer, who asked 50% of the questions, did. In other words, the inquiry didn’t, but it did.

    What a load of pretentious tosh.

  95. I too got as far as p16 where Montford is even now still trying to spin ‘trick’, ‘hide the decline’ and making accusations of deleting data and dishonestly splicing graphs.

    At that point I realised any further time spent on the writings of this apparatchik bot was time wasted. As JB says, it’s one overlong, verbose dog whistle trying its damnedest to squeeze but one single drop of blood out of the long-fizzled-out stone the septics lovingly call ‘climategate’.

    It’s not like Bishop von Daniken was ever going to admit his precious book is one long, ill-researched slander.

  96. Rattus Norvegicus

    I also like how he conveniently ignores the (real) finding of the ICO, which said that they should have turned over the info to Holland. The thing that these bozos ignore is the prima facie evidence that no emails were deleted (except possibly from inboxes, which is done all the time). That evidence, in case people are really dense, is the large number of emails in the purloined collection which are directly responsive to the Holland FOI requests.

    Really, they got nothin’.

  97. Best CIF quote on the report so far.

    “When I went there I found what appears to be an extended blog post masquerading as an official document.”

  98. Climate economics… is Tol talking bx or am I? Money ought to be on me, Tol being an economist. But I don’t think the numbers from Nordhaus (2002) can be compared directly the way Tol is using them.

  99. A clever parody:

    “This is a news website article about a scientific paper…”

  100. New Statesman has included McIntyre in their list of Top Ten People of 2010, and his admirers are out in droves defending and lauding him. If you fancy making corrections to some of the comments, the link is:

    No need to sign up, just make a post like you do here.