If you are going to claim that someone’s policy will cause upward of 200,000 deaths, I feel that you should have relevant supporting evidence. Maybe I’m just old-fashioned that way. Certainly, no such standards seem to hamper the editors at Vox.
Instead, they’ve just published “208,500 additional deaths could occur by 2026 under the Senate health plan,” in which Ann Crawford-Roberts et al. assure readers that they are using “solid estimates firmly rooted in scientific evidence — unlike the dubious claim that the ACA has saved ‘zero’ lives.”
Except here’s the thing: That claim about zero lives saved is supported by multiple independent lines of analysis. (To be precise, the claim is that “the best statistical estimate of the number of lives saved each year by the ACA is zero,” in large part because the ACA’s main effect has been to expand the notoriously ineffective Medicaid program.) There are the numerous studies showing that patients on Medicaid achieve worse health outcomes than those without any insurance. There is the “gold-standard” randomized controlled trial in Oregon that found no significant improvement in physical health from Medicaid coverage. There is work by economist Raj Chetty that found health-care access was not a determinant of life expectancy for low-income households. There is a paper from Yale researchers that found states achieve better health outcomes when they allocate less of their social spending toward health care.
And now we even have data from the ACA itself. As I have shown, the nation’s mortality rate stopped decreasing and actually increased when the ACA was implemented, and matters were worst in the states that accepted the ACA’s Medicaid expansion. A new working paper at the National Bureau of Economic Research likewise finds no significant improvement in self-reported health.
So if all that amounts only to a “dubious claim,” what must the “solid estimates firmly rooted in scientific evidence” look like? Well, it’s three studies. All with the same lead author. Two of them are analyses of the same Medicaid expansion from the same time period in the same three states (Arizona, Maine, and New York). So really it is one study of Arizona, Maine, and New York, and then a separate study of Massachusetts.
The three-state studies found no significant reduction in mortality from expanding Medicaid in Arizona or Maine; only New York showed one. So one might conclude that in some circumstances Medicaid may have a positive effect; in others, not. But it is in Massachusetts where things get fully derailed, because while the goal here is to show that a Medicaid-heavy reform reduces mortality, the Massachusetts policy did little to expand Medicaid.
For its estimate of 200,000 deaths nationwide, Vox relies on reductions in mortality achieved by Massachusetts when the state pursued health-care reform in mid-2006. The authors consider this a valid proxy for the ACA because "the national patterns [of coverage gains] are not so different from what those were in Massachusetts." Specifically, they say that 47 percent of the coverage gain in Massachusetts came through Medicaid. This is not correct.
Their supporting evidence is a report that shows 47 percent of the growth in Massachusetts insurance coverage between 2006 and 2010 came from growth in Medicaid. But that’s the wrong time period. The increase in coverage produced by reform was over by the end of 2008 and Medicaid accounted for less than 20 percent of that increase; more than 80 percent was in the private market. The Medicaid growth comes in 2009 and 2010, when coverage shifted away from the private market, likely as a result of the economic downturn.
Even crediting Medicaid with 20 percent of the coverage increase from the Massachusetts reform overstates its role, because the Medicaid rolls were growing everywhere. From mid-2006 through 2008, the CDC reports that nationwide Medicaid rolls grew by 10.7 percent. In Massachusetts, they grew by . . . 10.7 percent. So Medicaid growth can't explain how Massachusetts achieved better mortality results than other states. (Conversely, private-sector coverage fell by 0.6 percent nationwide over the period, whereas in Massachusetts it grew 7.9 percent, so that is the reform-driven change that could plausibly have lowered mortality in the state.)
This matter of time frames is doubly important because the entire improvement in mortality achieved by Massachusetts occurred in 2007 and 2008. To use the Massachusetts study as evidence that a Medicaid expansion will reduce mortality, one must credit Medicaid increases in 2009–10 for mortality declines in 2007–08. That seems . . . unlikely.
So the "solid estimates firmly rooted in scientific evidence" rely entirely on one lead author's studies of four states, of which only New York appears to show that expanding Medicaid reduces mortality. Every other attempt to establish that conclusion has failed, and most point squarely against it.
None of that makes Medicaid worthless. It does not mean that Medicaid, or the ACA generally, is killing people (though the evidence for that proposition looks as good as the evidence for the idea that it is saving many lives). It just means that the only “solid estimate firmly rooted in scientific evidence” remains the null hypothesis. Maybe it’s not the best clickbait headline, but the best statistical estimate of the number of lives saved each year by the ACA is still zero.
— Oren Cass is a senior fellow at the Manhattan Institute.