The Corner

Cautions about Interpreting the Massachusetts Mortality Study

A number of thoughtful and informed commenters have written recently about the potential policy implications of a recent study that asserted a relationship between the implementation of Romneycare in Massachusetts and a reduction in mortality in that state. In my view, before considering possible implications of the finding, they should be very cautious in accepting the premise that this study provides material evidence that Romneycare actually caused a reduction in deaths. 

The study itself has a highlighted caveat that the inability to eliminate possible confounders means that the study cannot definitively establish causality. In my view, this should not be treated as a platitude like “no study is ever definitive,” but rather as a warning that this study should not be taken as an indication of any cause-and-effect relationship between the introduction of Romneycare and a reduction in deaths.

To explain why, let me start with a few rough numbers. The study concludes that Romneycare was associated with a reduction in non-elderly adult mortality of 8.2 deaths per 100,000 persons. The non-elderly adult population of Massachusetts in 2010 was about 4.2 million. 4.2 million × 8.2 / 100,000 ≈ 344. The authors are asserting, in other words, that they can detect a reduction of a few hundred deaths per year in Massachusetts associated with Romneycare.
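The back-of-the-envelope arithmetic above can be checked directly (both inputs are the approximate figures quoted in this paragraph, not exact values from the study):

```python
# Rough check of the implied number of deaths avoided per year.
effect_per_100k = 8.2        # claimed mortality reduction per 100,000 non-elderly adults
population = 4_200_000       # approximate non-elderly adult population of Massachusetts, 2010

deaths_avoided = population * effect_per_100k / 100_000
print(round(deaths_avoided))  # → 344
```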

To evaluate how plausible it is that their method could reliably detect a signal of that size, consider what the study actually does.

One could start an effort to answer the question of whether Romneycare had an impact on mortality in Massachusetts with a simplistic observation like “Mortality dropped in Massachusetts after Romneycare was introduced, therefore Romneycare caused mortality to decline.” To which the obvious reply is “Yes, but how do you know it wouldn’t have gone down anyway?” To which the first refinement might be to compare the rate of reduction in Massachusetts to that of the rest of the U.S. as a whole. To which the reply might be “Yes, but Massachusetts is different from the rest of the U.S.” To which the next refinement might be to compare Massachusetts only to a package of other states that are “like” it. This is a standard analysis you will see in a lot of papers.

What the researchers did here was to take this method one more step. They exploited the fact that mortality and a lot of other data are reported at a county level. Roughly speaking, they took each county in Massachusetts and matched it to a control package of other counties that are “like” it in other states (e.g., Franklin County, Massachusetts would be illustratively matched to one county in Vermont, two in New York, one in Rhode Island, and one in Texas). They then looked at the trend in mortality for each county in Massachusetts versus its matched control counties, added these differences back up, and used that to create a refined estimate of the impact of Romneycare in Massachusetts.
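The logic of this county-matched comparison can be sketched as a simple difference-in-differences: for each Massachusetts county, compare its change in mortality to the average change in its matched controls, then average across counties. The sketch below uses invented numbers and a plain unweighted average, not the authors' actual matching or weighting scheme:

```python
# Illustrative sketch of a county-matched difference-in-differences (invented numbers).
# Each entry: (pre-period mortality, post-period mortality) per 100,000.
ma_counties = {
    "Franklin": (310.0, 295.0),
    "Suffolk":  (340.0, 330.0),
}
# Hypothetical matched control packages for each Massachusetts county.
controls = {
    "Franklin": [(315.0, 308.0), (305.0, 299.0)],
    "Suffolk":  [(345.0, 342.0), (335.0, 331.0)],
}

def did_estimate(ma, ctrl):
    """Average over counties of (MA change) minus (mean control change)."""
    diffs = []
    for county, (pre, post) in ma.items():
        ma_change = post - pre
        ctrl_change = sum(c_post - c_pre for c_pre, c_post in ctrl[county]) / len(ctrl[county])
        diffs.append(ma_change - ctrl_change)
    return sum(diffs) / len(diffs)

print(did_estimate(ma_counties, controls))  # → -7.5
```

A negative estimate means mortality fell faster in the Massachusetts counties than in their matched controls; the method's whole burden is the assumption that this gap reflects Romneycare rather than anything else that differed between the two groups.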

This is an improvement. This approach can be demonstrated analytically to improve precision materially versus a total-state-to-control-states approach. The open question here is whether it has sufficient precision in this case to detect something like the claimed impact.

There are two good reasons for caution in believing this, and these cautions interact.

First, this is an attempt to evaluate a policy that was enacted for an entire state, and only that state. N = 1. The mortality effects of any policy that was enacted by Massachusetts at the same time, but not executed in the control counties, would be ascribed by this method to Romneycare. There is no statistical test that can isolate, measure, or control for this effect.

To take an extreme example to make the point clear, if only Massachusetts had instituted a policy of relaxing pollution regulations such that massive amounts of carcinogens were released into Boston at the same time Romneycare was introduced, you would see a huge increase in mortality versus control counties that the researchers’ method would ascribe to Romneycare. The same thing would be true of any external factors that impacted mortality in a systematically different way in Massachusetts than in control counties. To take another extreme example, if terrorists set off dirty nukes in New York, Los Angeles, and Chicago that created thousands of cancers, this method would ascribe all of the lower trend rate of mortality in Massachusetts versus any control counties that happened to fall into those states to Romneycare. 

Of course, we know that neither the government of Massachusetts nor terrorists were poisoning thousands of people in the U.S. in this period. The real concerns are more subtle versions of these kinds of undetected biases between the Massachusetts counties and the control counties.

The second caution about this study makes it far more probable that these subtle issues will matter: The authors use counties as their unit of analysis. To state the obvious, counties don’t have health outcomes; people do. And people aren’t chained to the ground for life. The fastest way to create change in health outcomes for a population isn’t to make sick people healthier; it’s to move people in and out of the population. As far as I can tell, the word “migration” never appears in the paper, and unless I’m missing something, this issue is not even considered.

This seems to me to be a big deal. Over a decade, on the order of 1 million people move into Massachusetts, and a slightly larger number leave. This is gross migration of over 2 million people per decade in a state with a total population of fewer than 7 million people. Remember that the authors are claiming that a policy change in providing health insurance is associated with a total reduction of something like 350 deaths per year. Tiny changes in the health composition of the 2 million people who moved in or out of Massachusetts over the period of this study could easily swamp this effect.
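To make the scale concrete, here is a rough calculation using the figures in the paragraph above (the composition shift it solves for is a hypothetical, not a measured quantity):

```python
# How large a health-composition difference among movers would reproduce
# the entire claimed effect? (Back-of-envelope; all inputs approximate.)
claimed_deaths_avoided = 344   # per year, from the 8.2-per-100,000 estimate
gross_migrants = 2_000_000     # in- plus out-migrants over roughly a decade

# Mortality-rate difference (per 100,000 per year) between the people who
# churned through the state and those they replaced that would, by itself,
# produce the whole claimed effect:
required_shift = claimed_deaths_avoided / gross_migrants * 100_000
print(round(required_shift, 1))  # → 17.2
```

In other words, if the roughly 2 million people who moved in or out differed from those they replaced by about 17 deaths per 100,000 per year, that alone would account for the entire estimated effect, and a compositional difference of that size among movers seems well within the range of plausibility.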

So, combining these two cautions, is it plausible that there could have either been some other policy change in the same period, or some external change that affected Massachusetts differently than the weighted average of the county-level controls, in such a way as to move the needle on relative mortality in Massachusetts to the extent seen in this study?

To raise the bar for this question, the authors emphasize that the reduction in mortality in Massachusetts appears to have happened for non-elderly adults but not the elderly, disproportionately for the poor, and disproportionately for causes of death amenable to treatment. This is just what you would expect if Romneycare were causing the effect. Late in the paper, the authors imply that this is strong evidence that we should interpret the association causally, writing that “it is challenging to identify factors other than health care reform that might have produced this pattern of results.”

So more precisely, is it plausible that there could have either been some other policy change in the same period, or some external change that affected Massachusetts differently than the weighted average of the county-level controls, in such a way as to move the needle on relative mortality amenable to treatment in Massachusetts for adults, but not the elderly, and disproportionately for the poor?

My first try was to look at schools, since I know that there was a big push on K–12 education in Massachusetts in this period.

As a quick and dirty illustrative analysis, I looked at the performance on NAEP eighth-grade reading scores for Massachusetts versus the U.S. average. Here are two simple observations.

One, the Massachusetts advantage versus the U.S. average in scores improved materially, from an advantage of about eight points in 1998–2002 to an advantage of about twelve points in 2003–2009. It was clear by the mid-2000s that this shift was sustained. How do we know that families that were more prudent (and hence cared more about schools), and were therefore also more likely to have managed and continue to manage their health better, resulting in better outcomes for health-care-amenable mortality, didn’t tend to leave Massachusetts less or move to Massachusetts more in response to the better schools? That would likely affect non-elderly adults drastically more than the elderly.

Two, the advantage in scores did not trend up much at all for students well-off enough to be ineligible for school lunch. The whole effect was driven by a large increase in performance for students eligible for school lunch. 

In other words, in the same period that Romneycare was introduced, Massachusetts also executed a policy that made Massachusetts a relatively better destination for prudent poor families with adults in child-bearing years. This could plausibly have changed both in-migration and out-migration in a manner consistent with shifting the non-elderly poor population to one that is healthier with respect to sources of mortality amenable to treatment.

To emphasize a key point, I’m not saying anything approaching “here is the true secret key to explain relative movements in Massachusetts vs. US mortality 2000–2010.” My point is that the study has ignored an obvious potential confounder. I’m a non-expert on the subject who came up with this in an hour. Are there others?

Off the top of my head, a second plausible example of a confounder is that some people who were already sick, or who expected to become sick, elected to remain in Massachusetts rather than moving for exactly this reason. It is precisely those who were non-elderly, poor, and had health problems amenable to care who would have had the greatest rational incentive to respond. This would cut in the opposite direction from the schools effect, and tend to make Massachusetts’s relative mortality worse than it otherwise would have been.

A third example is that prudent people, who are likely to take better care of their health, would be expected to have a higher propensity to respond to economic incentives. This might cause healthier people to move more systematically to Massachusetts in response to Romneycare and better educational opportunities, in spite of the fact that the theoretical fully rational response should make sicker people more likely to move in reaction to this change in incentives. Paradoxically, because of differential response propensities, you could end up seeing healthier people move disproportionately in response to changed incentives that make Massachusetts a better deal for sick people.

A fourth example is that the biggest economic crisis since the Great Depression hit the economy during the period of analysis. If I understand their technical appendix correctly, the authors matched each county in Massachusetts to control counties based on various demographic factors (race, household income, unemployment, poverty rate, percent uninsured, and mortality rate) for the period 2000–2006. But it’s extremely plausible that areas that are very similar in terms of these baseline characteristics would have responded very differently to a huge secular shock like the financial crisis. Where is “percentage of employment in finance” or “percentage of employment by government”? Where is “importance of residential construction to employment”? Where is “role of research universities,” “technology intensity,” and a host of other factors? Differential response to the financial crisis could cause further changes in migration patterns, as well as changes in stress, alcohol consumption, smoking, and everything else that would affect health differently for Massachusetts than for the control counties.

In practice, I’m confident that exactly the strand of causality that the authors imply — “new law causes somebody to get health insurance that otherwise wouldn’t, leading to a medical treatment, leading in turn to a death avoided in this period” — occurred for at least one person. I’m also confident that at least one extra poor family in good health decided to stay in Massachusetts because the schools improved. I’m also confident that at least one extra person who thought they were headed for illness stayed in Massachusetts because of Romneycare, and then died in this period. I’m also confident that changing relative economic prospects of some industries versus others in the face of the financial crisis caused at least one extra person to move to Cambridge rather than move to New York, and at least one extra person to decide to move from Boston to Florida. 

All of these strands of causality, and many others, happened out there in the real world. It’s a huge bowl of spaghetti. I have no idea what the relative effect of any one of them is. But the idea that looking at the trend in one state and matching by county is going to isolate the causal effect of Romneycare is a real stretch.

Jim Manzi is CEO of Applied Predictive Technologies (APT), an applied artificial intelligence software company.

