Ezra Klein, Kevin Drum, and Ryan Avent all have posts up that attack George Will’s statement that “If you’re 29, there has been no global warming for your entire adult life.” Kevin Drum describes this as “idiotic,” Ryan as “moronic,” and Ezra responds, of course, with a chart. So does the always-numerate Kevin Drum, and I’ll use his version of the chart:
The funny thing is that if you zoom in on about the last ten years, you see this:
There has not been a lot of measured warming for the last ten years.
It’s hard to dispute this. What Ezra, Kevin, and Ryan are arguing is that it is idiotic, moronic, or whatever to claim that the past ten years of data disprove the theory of AGW. Their basic argument is “sure, but look at the long-term trend.” I agree with them that the last ten years of raw data don’t falsify the theory (and have argued this many times in many places), but I’m not sure any of them have thought through this question fully.
If I observe that it is cooler in New York today than yesterday, no reasonable person would take that as proof that AGW theory is wrong. On the other hand, if we had rapid growth of human population and rapid fossil-fuel-dependent economic development for the next 1,000 years with no increase in surface temperatures, any reasonable person would accept that AGW in anything like its current form had been disproven. The question is at what point between one day and 1,000 years I have enough evidence to reasonably reject the theory. It seems to me that you need a rational standard to answer this question before you simply call ten years “moronic” a priori.
In fact, it’s more complicated than that. If we had no warming over the past ten years (true) and lots more CO2 in the air (true) but also a huge increase in volcanic activity (not true, but posited as an illustration), this would not be evidence that AGW theory was untrue, because the models used to predict warming would have called for no warming because all the particulate matter thrown up by the volcanoes should offset the effect of the CO2. So what we are really looking for is the degree of divergence between the predictions of the models used as the basis for long-run warming predictions versus actual temperatures, in order to falsify or corroborate the operational theory that we can predict future long-run temperature impacts attributable to CO2 emissions. The rigorous version of the question then is: What is a valid falsification period for AGW models?
So, naturally we just go to the escrowed set of AGW models with their predictions made over the past 20 years or so, enter in all data for actual emissions, volcanic activity, and other model inputs from the time each prediction was made until today, and then run the models and compare their outputs to actual temperature change in order to build a distribution of model accuracy, right? Ha ha. Needless to say, no such repository exists.
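If such a repository did exist, the scoring exercise itself would be trivial. Here is a minimal sketch of what it might look like; the function name, data structure, and every number below are my own illustrative assumptions, not real model output or real observations.

```python
# Hypothetical sketch of scoring escrowed model predictions against
# observations. No such archive exists; all names and numbers here
# are invented purely to show the mechanics.

def score_predictions(archive, observed):
    """archive: list of (year, predicted_anomaly) pairs frozen at the
    time the prediction was made; observed: dict mapping year to the
    measured anomaly. Returns per-year errors (observed - predicted)."""
    return [observed[year] - pred for year, pred in archive if year in observed]

# Toy numbers, invented for illustration:
archive = [(2000, 0.30), (2005, 0.42), (2010, 0.55)]
observed = {2000: 0.28, 2005: 0.45, 2010: 0.48}
errors = score_predictions(archive, observed)
```

Run this for every archived model vintage and you would have exactly the error distribution the question demands; the hard part is institutional (archiving frozen predictions with their inputs), not computational.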
Almost all humans resist management and audit, and climate modelers are no exception. Because they have been so poorly managed, we have no well-structured program to evaluate accuracy, and instead must rely only on back-testing (or what among climate modelers is termed “hindcasting”). Now, this would be hard to do, for several reasons: The models (we believe) keep improving, so the accuracy of a 1988 model doesn’t necessarily tell us the accuracy of a 2008 model; the signal-to-noise ratio is low, so it requires several decades (we believe) to have a useful measure of accuracy, while we are being asked to make policy decisions now; and so on.
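The signal-to-noise point can be made concrete with a back-of-the-envelope calculation. Suppose annual temperatures scatter around the underlying trend with an independent standard deviation of 0.1 °C (an illustrative figure I am assuming, as is the independence; real interannual noise is autocorrelated, which makes matters worse). The standard error of an ordinary least-squares trend estimate then shrinks only slowly as the record lengthens:

```python
import math

def trend_se(n_years, sigma=0.1):
    """Standard error of an OLS trend slope (deg C per year) estimated
    from n_years of annual data with independent noise of standard
    deviation sigma (deg C). sigma=0.1 is an illustrative assumption."""
    # For years t = 0..n-1, the sum of squared deviations of t from its
    # mean is n(n^2 - 1)/12, so SE(slope) = sigma / sqrt(n(n^2 - 1)/12).
    sxx = n_years * (n_years**2 - 1) / 12
    return sigma / math.sqrt(sxx)

for n in (10, 20, 30):
    print(n, round(trend_se(n), 4))  # ~0.011, ~0.0039, ~0.0021
```

Under these assumptions, a ten-year record pins down the trend only to about ±0.011 °C/yr, which is comparable to the warming trend itself; thirty years does roughly five times better. That is the arithmetic behind “several decades.”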
But the instincts of those who are grasping for some way to hold the tools used to make temperature predictions accountable to reality are sound, even if their method is somewhat misguided. They aren’t idiots or morons, they’re just not specialists, and the government they pay for, which in turn funds the model construction project, hasn’t bothered to do its job and provide the best feasible measurements of the value of these models.