In his Philosophical Essay on Probabilities, the early-19th-century philosopher Pierre-Simon Laplace considered the question of what it would take to understand the clockwork universe then regnant in the minds of intellectuals. The imaginary intelligence, which came to be known as “Laplace’s Demon,” would simply need to know the position and momentum (yeah, yeah) of every particle in the universe at a given time, and from there it could extrapolate the future in its entirety:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
“Similarly,” Stephen Hawking writes, “if you knew it in the future, you could calculate what it must have been in the past. The advent of quantum theory in the 1920s reduced the amount one could predict by half,” and it has been going down ever since.

Around the time quantum theory was taking shape, the economist Ludwig von Mises was working out a complexity-based argument of his own: the famous socialist calculation problem. Without the information communicated by market prices, he argued, economic calculation is not merely inefficient but impossible, and the so-called scientific socialists, looking down at their five-year plans and their model villages like an archduke playing with his orrery, could not in fact do what they purported to want to do: rationally manage industries and national economies.
Markets, the brain, and weather are among the textbook examples of complex systems, and they have something in common: Their behavior cannot be calculated beforehand. There is no Laplace’s Demon, especially not for human systems. You never have the same party twice, or the same traffic jam. “The behavior of some simple, deterministic systems can be impossible, even in principle, to predict in the long term,” writes computer scientist Melanie Mitchell of the Santa Fe Institute.
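Mitchell's point, that even simple deterministic systems can be unpredictable in the long run, can be illustrated with the logistic map, a textbook toy model of deterministic chaos (the model and the particular numbers here are my own illustrative sketch, not anything from the column):

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# At r = 4 the map is fully chaotic: two trajectories that start a hair
# apart agree at first, then diverge to completely different values.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map `steps` times from starting point x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the start by one part in ten billion

# Track how far apart the two runs are at each step.
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"gap after 5 steps:  {gap[5]:.2e}")   # still microscopic
print(f"gap after 50 steps: {gap[50]:.2e}")  # the runs no longer resemble each other
```

The rule is perfectly deterministic and takes one line to state, yet a measurement error of one part in ten billion destroys any long-range forecast. That is the gap between Laplace's Demon and any actual forecaster.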
So, back to my original question: How confident should we be that our policies will produce the desired outcomes? That will depend in some part on how complex the system is that you are attempting to influence. Housing and mortgage markets are very complex, and politicians’ efforts to turn them to their own ends went very badly in 2008, and will go very badly again in the future. Health-insurance markets and medicine are both very complex, and we see how political efforts to manage those have been going.
Operating hospitals is a complex business, too. Consider a counterexample: Our food-stamp program has many problems, but imagine what a Hieronymus Bosch nightmare it would be if, instead of the current practice of giving poor people vouchers for food, we applied the VA model and attempted to have the government deliver the service itself rather than simply paying for it. That would mean federally operated farms, ranches, and slaughterhouses, government grocery stores, warehouses, distribution centers, transportation networks, etc., all managed with the competence and decency exhibited by the VA. Rather than trying to politically steer the extraordinarily complex system of producing and distributing food — rather than biting off way more than we can cognitively chew — we instead chose the relatively simple method, giving poor people vouchers for food. Of course that has its problems and unintended consequences, but they are milder than, say, national famine, which is probably what would come of government-run agriculture. We let the complex problem of food production meet the complex solution of the market.
Not every regulation or government program is doomed to fail. But we might consider the slightly terrifying possibility that when government does get something right, it does so by accident, temporarily, and for reasons that it cannot understand or replicate. This may be why the sheer volume of law and regulation has been climbing so rapidly: Intuiting its own inefficacy, Washington is throwing everything at the wall and seeing what sticks. The Entity with Whom politicians sometimes confuse themselves needed only ten commandments, not the ten thousand a year that Washington produces. Some of those coming down in the near future will be intended to reform the VA. The rational thing to do would be to abolish it. We’d be far better off paying veterans’ medical bills out of the Treasury than trying to operate a network of hospitals and clinics. And no matter what Washington promises to do to solve this problem, it is a good bet that the policy enacted will not produce the result intended. Reform is a random walk.
Another feature of complex systems is that some of them are very sensitive to initial conditions, as expressed by the butterfly effect. It may be the case that things have gone as well as they have for us in the United States not because of any current policy or because of the unique genius and saintliness of our national leadership as currently constituted, but simply because the right people with the right prejudices did the right things for a relatively short period of time in the 18th century, and what we have now is very little more than the compounded returns on that cultural windfall. That seems to me a more likely explanation for our relatively happy and secure place in the world than that we were led to this point by the kind of thinking, and the kind of men, who brought us the VA hospitals and those dead veterans.
— Kevin D. Williamson is roving correspondent for National Review.