Veronique — precisely.
This lack of accountability was obvious (and widely noted), of course, at the time Team Obama predicted that the stimulus spending would save a certain number of jobs. Here’s what I said then:
Suppose Ms. Famous Economist X predicts that “Unemployment will be about 10% on 1/1/10 without the bill, and about 8% with the bill”. What do you think will happen when New Year’s day 2010 rolls around and unemployment is 9.8%? I think it’s a very, very safe bet that Ms. X will say something like “Yes, but other conditions deteriorated faster than anticipated (who could have guessed that China would do a massive currency revaluation in the summer of 2009?), so if we hadn’t passed the stimulus bill, unemployment would have been more like 12%. So you see, I was right after all; it reduced unemployment by about 2%.” This is the problem with such non-experimental sciences – we have no way to measure the counterfactual.
Here’s how I proposed at the time to make such a prediction in a way that would actually allow for at least some accountability:
Anyone who claims to know the impact should escrow, with a named third party, a copy of the source code of the econometric model used to make the prediction, along with a stated confidence interval, operational scripts, and assumptions for all required non-stimulus inputs that populate the model. Upon reaching the date for which the prediction is made, the third party should run the model with the actual data for all non-stimulus inputs and compare the model result to the actual outcome. Any difference would be due to model error. We still would not be able to partition the sources of error between “error in predicting the causal impact of stimulus” and “other”, but at least we would have a real measurement of model accuracy for this instance.
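The escrow-and-rerun procedure can be sketched in a few lines of Python. Everything below is invented for illustration: the toy model, the input names (`gdp_gap`, `stimulus_pct_gdp`), and all numbers are hypothetical stand-ins, not any real econometric model.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple


@dataclass
class EscrowedPrediction:
    """What gets deposited with the third party at prediction time."""
    model: Callable[[Dict[str, float]], float]  # frozen model code
    assumptions: Dict[str, float]               # predicted non-stimulus inputs
    point_prediction: float                     # e.g. predicted unemployment rate
    confidence_interval: Tuple[float, float]


def evaluate(escrow: EscrowedPrediction,
             actual_inputs: Dict[str, float],
             actual_outcome: float) -> Tuple[float, float]:
    # Re-run the escrowed model with the realized values of the
    # non-stimulus inputs, as the proposal describes.
    rerun = escrow.model(actual_inputs)
    # Gap between reality and the re-run is pure model error.
    model_error = actual_outcome - rerun
    # Gap between the re-run and the original point prediction shows
    # how much of the miss came from wrong input assumptions.
    assumption_error = rerun - escrow.point_prediction
    return model_error, assumption_error


# Hypothetical toy model: unemployment as a linear function of two inputs.
toy_model = lambda x: 4.0 + 0.5 * x["gdp_gap"] - 1.0 * x["stimulus_pct_gdp"]

escrow = EscrowedPrediction(
    model=toy_model,
    assumptions={"gdp_gap": 6.0, "stimulus_pct_gdp": 1.0},
    point_prediction=6.0,          # toy_model(assumptions) == 6.0
    confidence_interval=(5.0, 7.0),
)

model_err, assumption_err = evaluate(
    escrow,
    actual_inputs={"gdp_gap": 8.0, "stimulus_pct_gdp": 1.0},  # realized data
    actual_outcome=9.8,
)
# model_err is the part of the miss the escrowed model cannot explain
# even with correct inputs; assumption_err is the part due to inputs.
```

As the proposal notes, this still cannot split `model_err` into "error about the stimulus effect" versus "everything else", but it does pin down, on the record, how accurate the escrowed model was in this one instance.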
To my knowledge, no government or academic macroeconomists actually did this. I wonder why not?