The Agenda

Brief Thought on David Brooks and Mechanistic Macro Models

Boy, people have really misunderstood David Brooks’s recent column on how to think about the economy. David has been accused of dismissing the importance of evidence and numbers, which doesn’t strike me as a fair characterization. There’s some confusion here, and David is partly to blame. The real problem is that many of the models we rely on to guide economic policy are mechanistic models that are, in effect, circular: their conclusions are baked into their assumptions.

Here is David:


The economic approach embraced by the most prominent liberals over the past few years is mostly mechanical. The economy is treated like a big machine; the people in it like rational, utility maximizing cogs. The performance of the economic machine can be predicted with quantitative macroeconomic models.

These models can be used to make highly specific projections. If the government borrows $1 and then spends it, it will produce $1.50 worth of economic activity. If the government spends $800 billion on a stimulus package, that will produce 3.5 million in new jobs.

Everything is rigorous. Everything is science.

Here is Ezra Klein on why liberals like numbers:


Liberals, by contrast, think businesses are worried about the future, but they think the problem is that there’s no demand and too much tail risk in the economy. Why invest in more capacity before the employment levels necessary to support that capacity come back? Why stop hoarding cash when Ireland could default and force another panic in the credit markets?

The preference for numbers that Brooks identifies comes because, well, these arguments have numbers and evidence behind them.

Much depends on which models we’re talking about. Some of these arguments have numbers and evidence behind them, and some of them wrap themselves in a cloak of numbers and evidence that conceal irresponsible and unwarranted leaps, as we saw during the debates over ARRA and PPACA. 

Remember what happened when economists Matthew Shapiro, Claudia Sahm, and Joel Slemrod offered empirical evidence suggesting that the Making Work Pay tax credit didn’t work very well? As my colleagues at Economics 21 noted, there was some very odd and unconvincing pushback from Jared Bernstein. The models used to predict that the Making Work Pay tax credit would prove successful were based on experiments that used the responses of college students under laboratory conditions to predict how a diverse array of U.S. households would behave.

And then there were the models that informed the larger fiscal stimulus effort. Here is how John Cochrane of the University of Chicago’s Booth School described them:


Bernstein and Romer’s CEA report on the stimulus famously used a multiplier of 1.5 to evaluate the effects of the stimulus. They took this multiplier from models (p.12). But the multiplier is baked in to these models as an assumption. They might as well have just said “we assume a multiplier of 1.5.”

More deeply, why use the multiplier from the model, and not the model itself? These “models” are, after all, full-blown Keynesian models designed purposely for policy evaluation. They have been refined continuously for 40 years, and they epitomize the best that Keynesian thinking can do. So if you believe in Keynesian stimulus, why use the multiplier and not the model?

The answer, of course, is that they would have been laughed at – nobody has believed the policy predictions of large Keynesian models since Bob Lucas (1975) destroyed them.  But how is it that one multiplier from the model still is a valid answer to the “what if” question, when the whole model is ludicrously flawed? If you believe the Keynesian model, let’s see its full predictions. If you don’t believe it, why do you believe its multiplier? [Emphasis added.]
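To see how little work the underlying model is actually doing here, consider a back-of-the-envelope version of the calculation Brooks and Cochrane describe. A minimal sketch in Python: the 1.5 multiplier and the $800 billion figure come from the passages quoted above, while the output-per-job figure is purely illustrative, chosen only to make the arithmetic line up with the familiar 3.5 million jobs number.

```python
# A minimal sketch of the multiplier arithmetic described above.
# Every figure here is an assumption; the "projection" is just multiplication.

assumed_multiplier = 1.5          # the multiplier Cochrane says was taken from the models
stimulus_dollars = 800e9          # the headline size of the stimulus package

# $1 of spending is assumed to produce $1.50 of economic activity.
projected_activity = assumed_multiplier * stimulus_dollars

# Turning dollars of activity into a jobs figure requires a further
# assumption about output per job; this number is purely illustrative.
assumed_output_per_job = 343_000
projected_jobs = projected_activity / assumed_output_per_job

print(f"Projected activity: ${projected_activity / 1e9:,.0f} billion")  # roughly $1,200 billion
print(f"Projected jobs: {projected_jobs / 1e6:.1f} million")            # roughly 3.5 million
```

Change the assumed multiplier and the projection changes in lockstep, which is exactly Cochrane’s complaint: the answer was in the assumption all along.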

This is what I believe David was getting at: empirical evidence is not the problem. Rather, the problem is overreliance on crude, mechanistic models. I use the term overreliance advisedly. Mechanistic models can serve a valuable purpose. But they’re only a starting point, particularly when we’re entering new, unexplored terrain. And a dynamic economy is always generating one-off events. I actually think the Making Work Pay experiment was a decent idea. It’s good to see how a number of insights from behavioral research play out in practice. But the broader project of fine-tuning a complex economy is destined to run into all kinds of unanticipated problems, hence the virtue of decentralization.

Ironically, David is making what I take to be a Keynesian point: psychology matters, and quantifiable risk is meaningfully different from Knightian uncertainty. 

(Jim Manzi has written a great post on this theme that has attracted some slightly obtuse replies.) 

Reihan Salam is president of the Manhattan Institute and a contributing editor of National Review.