The idea that Republicans and conservatives are waging a “war on science” has become a staple of Democratic rhetoric. Hillary Clinton frequently referenced this in her campaign speeches. Chris Mooney, who wrote a book by this name, has an article on it in a recent issue of The New Republic. Daniel Engber has a three-part series on this topic in Slate. This idea has become widespread among liberals — and, unfortunately, many scientists.
Mooney’s thesis is that it’s all been downhill for the relationship between science and politicians since the “halcyon years” of the 1950s. His explanation for why this happened is sociological and political: Republican politicians and their supporters didn’t like the conclusions that some scientists reached, so they tried to stonewall or devalue the science. As I have written about at length, there is something to this. But Mooney, Engber, and others fail to consider an additional possible cause for the changed relationship between science and politics today versus 60 years ago: the kind of science used to inform public debates has changed.
One of the most famous (and probably apocryphal) stories in the history of science is that of Galileo dropping unequally weighted balls from the Tower of Pisa in order to demonstrate experimentally that, contra Aristotle, they would not fall at different rates. To the modern mind, this is definitive. Aristotle was one of the greatest geniuses in recorded history, and he had put forward seemingly airtight reasoning for why they should drop at different speeds. Almost every human intuitively feels, even today, that heavy objects fall faster than light ones. In everyday life, lighter objects will often fall more slowly than heavy ones because of differences in air resistance and other factors. Aristotle’s theory, then, combined evidence, intuition, logic, and authority. But when tested in a reasonably well-controlled experiment, the balls dropped at the same speed. Aristotle’s theory is false — case closed. This idea of the decisive experiment is not the totality of the scientific method, but it is an important part of it.
Now, we can very closely approximate the gravitational forces that govern the rate of descent of a ball by applying Newtonian physics to the earth and the ball, while ignoring everything else. But this is only an approximation, since every object in the universe with mass actually exerts some gravitational attraction on both the earth and the ball. In part, this approximation works because gravitational force attenuates with distance by the inverse square law, so the force being exerted on the ball by, for example, the moon is comparatively tiny. These effects are so minute that scientists were able to demonstrate that Galileo’s finding was approximately valid — in fact, valid to within the measurement tolerance of available instruments — through all kinds of replicated experiments across Europe.
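Just how tiny the moon’s contribution is can be shown with a back-of-the-envelope calculation using Newton’s law of universal gravitation and standard published values for the constants (a minimal sketch, not a precise orbital computation):

```python
# Back-of-the-envelope: gravitational pull on a 1 kg ball
# from the earth at its surface vs. from the moon.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav_force(m1_kg, m2_kg, distance_m):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r^2."""
    return G * m1_kg * m2_kg / distance_m**2

ball = 1.0                                       # kg
f_earth = grav_force(ball, 5.972e24, 6.371e6)    # earth mass, earth radius
f_moon  = grav_force(ball, 7.342e22, 3.844e8)    # moon mass, earth-moon distance

print(f"earth: {f_earth:.2f} N")    # ~9.8 N -- the familiar weight of 1 kg
print(f"moon:  {f_moon:.2e} N")     # ~3.3e-5 N
print(f"ratio: {f_earth / f_moon:,.0f}")  # earth's pull is ~300,000x stronger
```

The moon’s pull on the ball is roughly five orders of magnitude weaker than the earth’s, which is why ignoring it (and everything else in the universe) still yields predictions valid to within the tolerance of ordinary instruments.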
But suppose that gravity didn’t attenuate in this way, so that if you let go of a ball its rate of descent might vary in all kinds of extraordinarily complex ways, the measurement of which exceeded the capacities of the best devices and computational facilities available in the world, because it mattered a lot exactly where you were versus every object with mass in the universe. There would be no way to isolate a component of the total system that had sufficient simplicity to allow us to conduct replicated experiments. We would be trapped by what we might call integrated complexity. If this were the case, Galileo might have had some perfectly true theory of gravity, but been unable to design an experiment with sufficient precision to “prove” that he was right (or more technically, to show that his theory passed repeated falsification tests of the kind that Aristotle’s theory failed). He would be forced to do a funny kind of science: a science without experiments. We’d probably still be arguing about who was right.
The trend since the 1950s has been that policy-relevant science has become increasingly resistant to falsification testing, because it tends to address scientific questions of integrated complexity. In the introduction to his book, Mooney provides the following list of government entities as the places where the Republican war on science has been most severe: the Department of the Interior (focusing in the book on the Fish and Wildlife Service), the National Cancer Institute (focusing on the epidemiological debate about the purported abortion–breast cancer linkage), the CDC, FDA, EPA, and NOAA (focusing on global warming).
When seen in this light, there is an obvious pattern: These examples are largely drawn from environmental science, systems science, epidemiology, and other fields dominated by integrated complexity. Note the lack of agencies that conduct research in physics, electrical engineering, and the other fields that dominated the executive-level dialogue between scientists and politicians during the Eisenhower administration. The science that informs public debate increasingly cannot use experiments to adjudicate disagreements, and instead must rely on dueling models. We wouldn’t purposely expose randomly selected groups of people to lead paint, and couldn’t build parallel full-size replicas of earth and pump differing levels of CO2 into them.
Limited opportunity for falsification testing is an important characteristic of the two topics that Mooney emphasizes in his recent article, both of which he details in his book: global warming and the “Star Wars” missile defense system.
Consider global warming. No serious scientist has ever disputed that CO2 is a greenhouse gas, since it has been shown in replicated laboratory experiments to absorb and redirect infrared radiation. The key open scientific question has been the net effect of increasing CO2 concentrations after climate feedbacks. This is a problem of integrated complexity. We can’t even approximately isolate a component of the climate system because these feedbacks are predicted to occur over decades and are globally interconnected; for example, polar ice caps melt, which changes ocean circulation patterns in the Atlantic which changes cloud formation in Florida and so on. We have constructed large computer models to represent and predict the integrated global climate system, but how do we know they are right? Not absolutely certain, but certain to the degree that we know that CO2 is a greenhouse gas or that unequally weighted objects will fall at the same rate in a vacuum? We don’t, and we can’t, because we can’t conduct decisive experiments to test them.
Or consider Star Wars — which, on the surface, seems to be an example of old-school physics, optics, and rocketry. Mooney’s book describes the scientific debate moving over a period of years in the 1980s, from skepticism over our ability to develop effective system components, such as lasers of sufficient power and mobility or sufficiently accurate tracking systems — all of which have subsequently been proven feasible by the decisive experiment of actually shooting down test missiles — to the “even more fundamental” problem of having reliable system-control software. He cites an Office of Technology Assessment study from 1988 that noted the system “would stand a significant chance of ‘catastrophic failure’ due to software glitches the very first — and presumably, only — time it was used.”
This is precisely the point that I remember being made in a campus-wide debate on Star Wars when I was at MIT in the 1980s. The speaker indicated what we all knew to be true: when debugging complex software, even after you’re done with formal testing, you start to use it in practice against more and more cases, fixing problems as they become apparent, until you stop getting errors. But because every line of the huge code base interacts, even when you complete this procedure, you never know if there is some error hidden somewhere in the code that will only be revealed in some unanticipated use case. Of course, this is precisely integrated complexity.
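A toy illustration of the speaker’s point (hypothetical code, nothing to do with the actual missile-defense software): a function can pass every case it was exercised against during debugging and still harbor a defect that only an unanticipated input exposes.

```python
def average(values):
    """Mean of a list -- correct on every input the author happened to test."""
    return sum(values) / len(values)

# Passes all the cases used during formal testing and debugging:
assert average([1, 2, 3]) == 2
assert average([10.0]) == 10.0

# ...but the first unanticipated use case reveals a latent error:
try:
    average([])  # ZeroDivisionError: len(values) == 0
except ZeroDivisionError:
    print("latent bug revealed only on first real-world use")
```

Testing can show the cases you tried are handled; it cannot show that no untried case will fail — which is the “catastrophic failure on first use” worry in miniature.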
There is a spectrum of predictive certainty in various fields that label themselves “science,” ranging from something like lab-bench chemistry at one extreme to something like social science at the other. Scientific fields that address integrated complexity sit in a gray area somewhere in between. We can pound the table all we want, and say with smoldering intensity that “science says X,” but our certainty is much lower when X = “the projected change in global temperatures over the next 100 years” than when X = “the rate at which this bowling ball will fall.”
Serious scientists in fields dominated by integrated complexity are constantly trying to develop methods for testing hypotheses, but the absence of decisive experiments makes it much easier for groupthink to take hold. A much larger proportion of scientists self-identify as liberal than conservative, so when scientific questions of integrated complexity impinge on important political questions, the opportunities for unconscious bias are pretty obvious. Hasty conservative political pushback (e.g., “global warming is a hoax”) naturally creates further alienation between these politicians and scientists. The scientists then find political allies who have political reasons for accepting their conclusions; consequently, many conservatives come to see these scientists as pseudo-objective partisans. This sets up a vicious cycle. Unfortunately, that’s where we find ourselves now in far too many areas.
The way forward from this morass is closer engagement with science by conservatives. As a starting point, we should work to elevate the role of experiments whenever possible. Recent laudable efforts to demand randomized field trials when evaluating education and other social science programs — much as we demand clinical trials prior to drug approvals — are a great example of this. A focus on third-party validation of the forecasts produced by global climate models would be another excellent step. In addition, wherever experiments are simply not practical, conservatives need to get into the details of the science in order to understand degrees of uncertainty. We like to think of science as providing black-and-white answers, but when we are faced with integrated complexity, it’s all shades of gray.
– Jim Manzi is the CEO of an applied artificial-intelligence software company.