I have precisely zero envy for anyone trying to predict where the COVID-19 pandemic is headed. There’s a ton we don’t know about how quickly the virus spreads under different conditions, what new treatments will appear, how policy will change in the future, and how well the public will practice social distancing during the summer regardless of policy. Even the world’s top epidemiologists cannot be expected to give us more than educated guesses that are based on reasonable, clearly explained assumptions — and that improve as new information comes in.
Some of those guesses can be quite valuable even as they’re quite uncertain. Given the massive funding behind it and the faith placed in it by the now-moribund White House Coronavirus Task Force, I once hoped that the COVID-19 model from the University of Washington Institute for Health Metrics and Evaluation (IHME) would prove to be one such valuable tool. But those hopes have been dashed: The model simply doesn’t work, and the folks behind it are still futzing with its fundamental workings as the pandemic enters its least predictable phase.
During this crucial period when the country is reopening, we can have no faith in the model’s output. It shouldn’t be used to inform policymaking decisions at all. This isn’t to slight the scientists who took on such a difficult task; it’s just a realistic assessment of how the project turned out.
* * *
When modeling epidemics, scientists typically try to simulate the way a virus spreads: exponentially at first, because each infected person interacts with many other vulnerable individuals, and then slowing down as the population either gains immunity or takes deliberate steps to reduce transmission. This process can be modeled in very general terms, or by simulating the specific interactions and infections of millions of people as the Imperial College COVID-19 model does. Either way, the result’s utility is limited by the fact that researchers had to make a bunch of assumptions to arrive at it. Exactly how quickly does the disease spread when left unchecked? How much do people reduce their interactions when advised or legally required to practice social distancing? Which types of interactions are most dangerous? Different answers to these questions can yield very different modeling results.
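The mechanistic approach described above can be sketched with a bare-bones SIR (susceptible, infected, recovered) simulation. Everything here is an illustrative toy, not any group's actual model, and the parameter values are invented assumptions rather than COVID-19 estimates:

```python
# A minimal SIR simulation: infections grow roughly exponentially at first,
# then peak and decline as the susceptible pool shrinks. Parameter values
# are illustrative assumptions, not estimates for COVID-19.

def simulate_sir(population=1_000_000, initial_infected=10,
                 beta=0.3, gamma=0.1, days=200):
    """beta: daily transmission rate; gamma: daily recovery rate."""
    s = population - initial_infected  # susceptible
    i = float(initial_infected)        # currently infected
    r = 0.0                            # recovered (immune)
    infected_by_day = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_by_day.append(i)
    return infected_by_day

curve = simulate_sir()
peak_day = max(range(len(curve)), key=lambda d: curve[d])
```

In a model like this, the contested assumptions show up as parameters: social distancing is a lower `beta`, and different answers to the questions above produce very different curves.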
The IHME model was meant to sidestep that issue. Rather than re-creating the underlying processes through which a disease spreads, it looked at what had actually happened in other countries during this pandemic. It “fit a curve” connecting trends in the U.S. with trends in other places, showing us where we’d end up if things worked out the same way as they had for those places.
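To make the curve-fitting idea concrete, here is a toy version: assume daily deaths trace a bell-shaped (symmetric) curve and estimate its parameters from the data observed so far. The functional form and every number below are illustrative assumptions, not IHME's actual method or data:

```python
import numpy as np

def bell(day, height, peak_day, width):
    """A symmetric, Gaussian-shaped curve of daily deaths (illustrative only)."""
    return height * np.exp(-((day - peak_day) ** 2) / (2 * width ** 2))

# Pretend we have observed the first 30 days of a curve that actually
# peaks on day 40 (synthetic data, invented for this sketch).
days = np.arange(30)
observed = bell(days, height=80.0, peak_day=40.0, width=12.0)

# The log of a Gaussian is a quadratic in `day`, so an ordinary
# polynomial fit recovers the curve's parameters from partial data.
a, b, c = np.polyfit(days, np.log(observed), deg=2)
fit_peak_day = -b / (2 * a)
fit_width = np.sqrt(-1 / (2 * a))
fit_height = np.exp(c - b ** 2 / (4 * a))
```

Extrapolating the fitted symmetric curve forward implies that deaths will fall exactly as fast as they rose, which is the shape assumption at issue in what follows.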
There were some hiccups almost immediately: Early IHME estimates ranged from 100,000 to 200,000 deaths, but the number soon dropped to 80,000 and then even lower. As I pointed out at the time, at least some of these revisions were easily justified. The researchers were getting important new data — including death trends from countries whose pandemics had recently peaked and updated information about how many Americans were hospitalized for each death that occurred. Models should change when better information comes in. That’s how they’re calibrated to make better predictions in the future.
Unfortunately, it turned out that the model had far deeper problems than incomplete information. One was that, when fitting those curves, it operated under an assumption that states’ curves would generally be symmetrical — that deaths would fall as quickly as they had risen. There is no reason this has to be the case, and indeed, imposing a specific shape like this undermines the entire point of drawing on other places’ experiences to see where we could end up.
Another problem was that the damn thing simply didn’t work. It wasn’t and isn’t some conspiracy to over- or under-predict the toll of COVID-19; it just has not been very accurate. Last month, a group of researchers pointed out that the model was failing to make even the shortest-term predictions accurately at the state level. Its guesses were so far off that even its “95 percent” intervals didn’t include the correct value 70 percent of the time. The group updated its paper a few weeks later, finding that the predictions were no better in later versions of the model (though the accompanying intervals had widened, so they included the correct number far more often).
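The kind of calibration audit the researchers performed can be shown in miniature: a well-calibrated 95 percent interval should contain the true value about 95 percent of the time, and you check that by counting. The interval and death figures below are hypothetical, invented to illustrate the calculation, not IHME's numbers:

```python
# Each tuple is (interval low, interval high, actual deaths); all made up.
predictions = [
    (90, 110, 130),
    (40, 60, 45),
    (200, 260, 300),
    (10, 30, 35),
    (150, 190, 160),
    (55, 75, 100),
    (300, 380, 250),
    (20, 40, 30),
    (120, 140, 180),
    (60, 90, 140),
]

# Empirical coverage: the share of intervals that contained the true value.
covered = sum(low <= actual <= high for low, high, actual in predictions)
coverage = covered / len(predictions)
print(f"Empirical coverage: {coverage:.0%}")  # here 30%, far below a nominal 95%
```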
The IHME model’s national-level predictions weren’t quite that bad, but at one point they ticked down to about 60,000. We’re already at 70,000 today.
* * *
We are now entering the least predictable phase of the pandemic. The weather is warming up, some states are officially reopening, and plenty of individuals are venturing out into the world more. Ideally, by this point a model would be well-tested, refined, and ready to face new challenges. Instead, the IHME model was completely overhauled this week, with the death projections doubling in the process.
Remember above, where I mentioned how other models start with an understanding of how a disease spreads and simulate what happens from there? The new version of the IHME model is a “multi-stage hybrid” that incorporates elements of that approach. And that’s just one part of what one professional modeler calls an “eye-glazing list of changes.” Another of those changes allows the death rate to fall more slowly than previously assumed.
These changes very well may be improvements. Hell, at this point there’s nowhere to go but up. But the old version of the model didn’t work, and trusting the brand-new version at such a crucial time is too much to ask.