The United Nations has just released, with much hoopla, a summary of its report on the current state of global-warming science. This is the fourth in a series of reports that the U.N. Intergovernmental Panel on Climate Change (IPCC) has issued every five years since 1996. From 1996 through today, these reports have asserted a steadily increasing level of certainty that human activities cause global warming. The current summary indicates that the IPCC is “90% confident” that we have caused global warming. The summary further implies that if we double the concentration of carbon dioxide (CO2) in the atmosphere, the IPCC is 90 percent confident that we will cause further warming of 3° C ± 1.5° C.
But what do these statements of confidence really mean? They are not derived mathematically from the type of normal probability distributions that are used when, for example, determining the margin of error in a political poll (say, ± 5%). IPCC estimates of “confidence” are really what we would mean by this word in everyday conversation–a subjective statement of opinion. This is a very big deal, since bounding the uncertainty in climate predictions is central to deciding what, if anything, we should do about them.
What We Know
Three key assertions in the global-warming debate are supported by a very large body of evidence. One, the concentration of CO2 in the atmosphere has increased by approximately 35 percent over roughly the last 150 years. Two, global temperatures have risen something like 0.6° C over this period. Three, CO2 is a greenhouse gas, meaning simply that it absorbs and redirects infrared (longer-wavelength) radiation but not shorter-wavelength radiation.
It’s highly plausible to proceed from these three points to a fourth assertion, namely, that the increase in CO2 caused the increase in temperature. The basic physics behind such a scenario is quite clear. Essentially, the sun constantly bombards the earth with a significant amount of high-energy radiation with short wavelengths, such as visible light. Some portion of this is temporarily absorbed by the land and oceans, where it does work by moving electrons around. This work consumes energy, so that a significant portion of the radiation that is subsequently re-emitted by the Earth is lower-energy / longer-wavelength infrared radiation. As the re-emitted infrared radiation travels through the atmosphere on its way back to space, some of it is absorbed by CO2 molecules and then scattered, so that some portion of this absorbed energy is then redirected back towards the Earth. All else equal, the more CO2 molecules in the atmosphere, the hotter it gets. Doubt this and you doubt the last 120 years of particle physics.
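The scale of the overall greenhouse effect can be seen with the standard textbook energy-balance calculation: a planet that absorbed sunlight but had no infrared-absorbing atmosphere would settle at about 255 K (roughly -18° C), far below the roughly 288 K (15° C) we actually observe. A minimal sketch of that calculation, using standard round-number constants rather than figures from this article:

```python
# Back-of-the-envelope energy balance for a planet with NO greenhouse
# effect. The solar constant and albedo are standard textbook round
# numbers, not figures from this article.
SOLAR_CONSTANT = 1361.0  # W/m^2 arriving at Earth's distance from the sun
ALBEDO = 0.30            # fraction of sunlight reflected straight back
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

# Sunlight is intercepted over a disc (pi*r^2) but radiated away from
# the whole sphere (4*pi*r^2), hence the division by 4.
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
t_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(round(t_no_greenhouse))  # -> 255 K, vs. roughly 288 K observed
```

The roughly 33° C gap between those two numbers is the warming supplied by water vapor, CO2, and the other greenhouse gases combined.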
In a simplified model of the planet, in which the complexities created by things like water vapor, convection, clouds, trees, polar ice caps, and so on are all ignored, it is pretty straightforward to estimate the warming impact of increasing concentrations of CO2. But here’s the trick: The Earth is nothing like that planet. Any change, including pumping out more CO2, initiates an incredibly complicated set of feedback effects. Some of these will tend to magnify the greenhouse warming impact, and others will tend to dampen it. Famously, as the atmosphere heats up, polar ice caps tend to melt; this reduces the amount of solar radiation that is reflected and therefore causes further heating. On the other hand, more CO2 should lead to faster plant growth; this pulls CO2 out of the atmosphere and therefore reduces warming. The list of such potential effects is very long; many of these feedback effects interact with one another; these interactions interact with one another; and so on ad infinitum.
The entire legitimate scientific debate is really about these feedback effects. Feedbacks are not merely details to be cleaned up in a picture that is fairly clear. The base impact of a doubling of CO2 in the atmosphere with no feedback effects is on the order of 1° C. The IPCC estimate of the impact of doubling CO2 is about 3° C. So the feedback effects in the IPCC scenario dominate the prediction.
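The arithmetic here is worth making explicit. A standard simplified expression puts the extra radiative forcing from a CO2 doubling at about 5.35 × ln(2) ≈ 3.7 W/m²; multiplying by a no-feedback sensitivity of roughly 0.3° C per W/m² (an illustrative round number) gives the ~1° C base effect, so reaching ~3° C requires feedbacks to contribute roughly two-thirds of the total warming. A sketch:

```python
import math

# Rough arithmetic behind "base effect ~1 C, IPCC estimate ~3 C".
# The forcing formula is a standard simplified expression; the
# no-feedback sensitivity is an illustrative round number.
forcing = 5.35 * math.log(2)        # W/m^2 for a CO2 doubling, ~3.7
no_feedback_sensitivity = 0.3       # degrees C per W/m^2
base_warming = no_feedback_sensitivity * forcing  # ~1.1 C

ipcc_estimate = 3.0                 # degrees C, from the summary
# In a simple feedback model, total = base / (1 - f); solve for f.
feedback_fraction = 1 - base_warming / ipcc_estimate

print(round(base_warming, 1), round(feedback_fraction, 2))  # -> 1.1 0.63
```

In other words, under these illustrative numbers, almost two-thirds of the predicted warming comes not from CO2 directly but from the assumed feedbacks.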
While it is a theoretical possibility that all the feedback effects together could lead to actual cooling, it is counter-intuitive and highly unlikely. Feedback effects could, however, easily dampen the net impact so that it ends up being less than or equal to 1° C. In fact, the raw relationship between temperature increases and CO2 over the past century supports that idea. The U.N. IPCC estimate is based on a set of feedback effects that are assumed to massively amplify the base effect. Uncertainty about feedback effects isn’t a marginal issue, but goes to the heart of how much, if at all, we should be worried about global warming.
What We Don’t Know
Over the past several decades, in order to account for feedback effects, teams in multiple countries have launched ongoing projects to develop large computer models that simulate the behavior of the global climate. Roughly speaking, these models divide the surface of the Earth plus its atmosphere into a set of slices, usually about 200 kilometers on a side and about a kilometer thick. A set of rules for how the slice-shaped elements in the model interact with one another is established based on our understanding of atmospheric physics, e.g., if an element heats by X° C, then within the next hour the adjacent elements will heat up by Y° C. A set of initial conditions is estimated for things like the current temperature of each element. The model then advances to the next hour based on the set of rules. Each element then has a new value. Then the model advances through the following hour of changes, and so on for a simulation of many years of climate evolution.
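The mechanics of that hour-by-hour update loop can be sketched in miniature. The toy below steps a one-dimensional ring of ten cells forward in time; the coupling rule and temperatures are invented stand-ins for real atmospheric physics, so it illustrates only the structure of the computation, not the climate:

```python
# Toy version of the update loop: a ring of grid cells, advanced one
# "hour" at a time. The coupling constant and starting temperatures
# are invented; only the structure mirrors the models described above.
def step(temps, coupling=0.1):
    n = len(temps)
    # Each cell relaxes toward the average of its two neighbours.
    return [temps[i] + coupling * (temps[i - 1] + temps[(i + 1) % n]
                                   - 2 * temps[i])
            for i in range(n)]

temps = [15.0] * 10
temps[0] = 25.0               # one anomalously warm cell
for hour in range(1000):      # simulate 1000 hourly updates
    temps = step(temps)

# Heat has diffused: every cell now sits near the ring's mean of 16.0 C.
```

A real model does the same kind of thing, except with hundreds of thousands of three-dimensional elements, many interacting physical variables per element, and vastly more complicated update rules.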
These models are the basis for the oft-cited predictions of how much global temperatures will rise based on CO2 emissions. As with all models, they are approximations to reality. Scientists in any specialty normally evaluate the reliability of such a simulation tool by asking two questions: 1) Are the quantitative relationships within it based on a reasonably complete set of proven physical laws? 2) How accurately does it predict future outcomes given complete input data? For climate models, the answers are “partially” and “unknown.”
Climate modelers tend to be smart and dedicated. They use known laws of atmospheric physics to establish the rules in these models whenever possible, but there are big gaps. Most obviously, the bulk of the real physics of convection, cloud formation, and so forth happens at scales much smaller than the roughly 15,000 square miles of a grid element. This physics must therefore be represented at a combined and gross level by parameters for each element that are determined by the modelers. Competent modelers attempt to ground these parameters in physical laws as well as possible, but the parameters remain estimates of a compilation of many smaller-scale processes. Even if the physics of each of the smaller-scale processes were perfectly understood, the parameters would still be a piece of patchwork, and large uncertainties are inherent in them. Even more fundamentally, the physics for some of the feedback effects believed to be most important is not well understood. And finally, many plausibly hypothesized feedback effects that could massively influence temperature are not included in the models at all. These models are complicated as compared to simulation models used in some other fields, but are extremely simplistic as compared to the actual global climate.
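To picture what such a parameter means: a process like cloud formation, which happens far below the grid scale, gets collapsed into a single tuned adjustment applied to each element. The function and numbers below are entirely invented; they show only the shape of the trick, not any real parameterization:

```python
# Invented illustration of a sub-grid "parameterization": the detailed
# physics of cloud formation inside a grid element is replaced by one
# tuned number. Nothing here comes from a real climate model.
def sub_grid_cloud_effect(humidity, tuned_albedo_factor=0.2):
    # A real model cannot resolve individual clouds at this scale, so
    # it applies an aggregate correction estimated by the modelers.
    cloud_fraction = min(1.0, max(0.0, (humidity - 0.6) / 0.4))
    return -tuned_albedo_factor * cloud_fraction  # cooling, in C/hour
```

The uncertainty lives in numbers like `tuned_albedo_factor`: a modeler must choose it to stand in for an enormous amount of unresolved physics, and small changes in such choices can shift the model's long-run behavior substantially.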
When evaluating model reliability, the second test–can it predict accurately?–is the acid test. We can debate all day about whether a model is complete enough, but if it has correctly predicted major climate changes over and over again, that is pretty good evidence that its predictions should be taken seriously. There are plenty of studies that show what is called “hindcasting,” in which a model is built on the data for, say, 1900-1950, and is then used to “predict” the climate for 1950-1980. Unfortunately, it is notoriously common for simulation models in many fields to fit such holdout samples in historical data well, but then fail to predict the future accurately. So the crucial test is actual prediction, in which a model is run today to forecast the climate for some future time-period, and then is subsequently validated or falsified. No global climate model has ever demonstrated that it can reliably predict the climate over multiple years or decades–never.
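A toy makes the distinction concrete. In the entirely synthetic “record” below, a line fitted to the first 50 years hindcasts the next 30 almost perfectly, because the underlying trend has not changed; once it does change, the same model fails badly. All numbers are invented:

```python
# Why a good hindcast is weak evidence: fit on the early record,
# "predict" a holdout period, then watch the model fail when the
# underlying process shifts. The data are entirely synthetic.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rmse(xs, ys, slope, intercept):
    return (sum((y - (slope * x + intercept)) ** 2
                for x, y in zip(xs, ys)) / len(xs)) ** 0.5

# Steady warming trend for 80 "years", then the regime changes.
years = list(range(110))
temps = [0.01 * t if t < 80 else 0.8 + 0.05 * (t - 80) for t in years]

slope, intercept = fit_line(years[:50], temps[:50])   # train: years 0-49
hindcast_error = rmse(years[50:80], temps[50:80], slope, intercept)
forecast_error = rmse(years[80:], temps[80:], slope, intercept)
# hindcast_error is essentially zero; forecast_error is large.
```

The hindcast looks flawless precisely because the holdout period behaves like the training period; it tells us nothing about whether the model captures the dynamics that will govern the future.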
The available evidence indicates that it is probable (though not strictly scientifically proven) that human activities have increased global temperatures to date and will likely continue to do so. But in spite of all the table-pounding, nobody can reliably quantify the size of these future impacts, or even bound them sufficiently to guide action. The total impact of rising global temperatures over the next century could plausibly range from negligible to severe. Long-term climate prediction is in its infancy, and improved forecast reliability is crucial to enable useful guidance for policymakers. Better science could give us what is most needed in this debate: more light and less heat.
– Jim Manzi holds a degree in mathematics from MIT and is the CEO of an applied artificial intelligence software company.