The Census Bureau’s decision to change the way it asks about health-insurance coverage in a key annual survey just as Obamacare is being rolled out is an interesting and important story, but it is not as interesting and important as some people today are suggesting. I think the administration made the wrong decision in allowing the Census to proceed with the change, but I don’t think the move involved a malicious attempt to cook the books.
This looks like one of those many instances in which White House officials confront a no-win situation. If the story in Tuesday’s New York Times had been that the Census Bureau asked the White House for permission to change those health-care questions to make them more accurate and had been denied permission, rather than, as did happen, that the permission was granted, the White House would have been coming under basically the same sort of criticism it is now receiving. Given that there was no avoiding that trouble, the White House should have chosen to keep in place a reliable baseline for measuring changes in the number of insured Americans. Blurring that baseline might have some near-term political advantages for the administration, but it will have significant disadvantages too, and my suspicion is that they actually made the decision they did to avoid seeming to interfere in the work of the Census Bureau, rather than to use that work to their advantage. Clearly that didn’t work. More than anything, though, the story highlights how very little we actually understand about the scope and character of the problem of the uninsured, and how difficult it is to change that.
What follow, for those interested in the minutiae, are some more detailed reflections on the whole fine mess.
So what happened?
On Tuesday, the New York Times reported that the Census Bureau had decided to alter the questions it asks about health coverage in the Annual Social and Economic Supplement of the (otherwise monthly) Current Population Survey. The annual supplement has long been considered the authoritative source on the number of Americans who are uninsured, and so has served as a kind of benchmark for one of the key problems with America’s health-care system. But for just as long, Census officials and health economists have found that the survey systematically overstates the number of uninsured. This has made it particularly useful to people who want to exaggerate the extent of the (undoubtedly very serious) problems with our health-care system, and particularly frustrating for people who want to better understand those problems and their causes.
The fact is that the CPS supplement never should have risen to the status of the trusted source on health-insurance statistics. It did so because the supplement is based on a fairly large sample and is also the source of our annual poverty-rate statistics, and so its publication each year provided a convenient place for a kind of benchmark measure. It also rose to that level because it is extremely difficult to accurately measure the number of uninsured Americans, so that no one can really offer another measure as an authoritative alternative.
The measures we do have provide us with a general picture of the trend line in insurance coverage over recent decades — and it’s not actually the trend many people might expect. Here’s how economist Scott Winship compiled the available data in 2012:
The general trend over time has been massive growth in insurance coverage, but that growth petered out by the early 1990s and even began to reverse a bit in the last decade, and the portion of Americans who are uninsured has stabilized at a level everyone would like to see decreased. The debate we have had has involved, among other questions, why that figure has flattened at that level and what might be done about its underlying causes.
But that general trend is all we really have. The specific data underlying it are not very reliable. And because even its champions project that Obamacare will leave about 30 million Americans uninsured over the long term, the changes we might reasonably expect to see in the next few years would be relatively modest changes on this scale, and so the specific data and measurements we use to make assessments will make a big difference in analyzing the effects of the law. That’s why a lot of people had hoped to be able to use the Census CPS data — since it offers a relatively (though not entirely, as noted below) stable baseline for assessing change.
It is not, however, a great tool for measuring the absolute levels of insurance coverage in America. An annual survey is not an ideal way to measure health-insurance coverage, since people become insured or uninsured throughout the year, and the CPS supplement asks people in the spring of each year to report on whether they were insured the previous year. Some people answer with regard to the present, some answer regarding whether they were covered at any point in the past year, and some answer regarding the whole year, and no amount of tinkering has managed to address the problem. What the tinkering (which has been frequent, and was especially significant in 1988, 1995, and 2000) has shown is just how susceptible the results are to various ways of approaching the question, and so how unreliable they are as an overall picture of insurance coverage in America.
In 2000, for instance, the CPS supplement introduced a simple verification question: If people had answered “no” when presented with a list of possible options for different kinds of insurance coverage on the questionnaire, then the interviewer, rather than just note them as uninsured, would ask, “So does this mean I should record you as uninsured?” They found that an amazing 8 percent of respondents answered “no,” and only in the wake of this verification question (which, for those who answered in the negative, was followed again by a list of insurance options) did they report that they were in fact insured. Most of them had private, employer-sponsored coverage. The Census has continued using that verification question ever since, and its continued effectiveness suggests that the CPS estimates for the uninsured before 1999 were too high by a significant margin.
Even after that change, though, the CPS estimate has remained significantly higher than those of most other surveys, including other Census surveys. The Survey of Income and Program Participation (SIPP), a longitudinal study the Census has carried out each year since 1983, has long suggested that the CPS data consistently overestimate the number of uninsured in America.
And the Census has long been aware of this. In 2006, when I worked in the Bush White House, the Census Bureau raised concerns in the course of one policy process about the way the CPS data on insurance were being relied upon, noting that these data had been systematically overestimating the number of uninsured Americans. An HHS report the previous year had also described that problem. After reviewing the problem, an Office of Management and Budget examiner suggested that the only real solution might be to remove the health-insurance questions from the annual survey altogether and rely instead on other public and private sources for figures.
The Census Bureau considered but ultimately rejected that suggestion, though they have since started highlighting information from sources other than the CPS, and especially the SIPP data. The two are actually often confused now in public discussions about Census figures regarding the uninsured — in fact, the online version of the very New York Times article that broke this story yesterday suffered from this confusion. The article correctly described the annual CPS survey as the one being changed, but the words “annual report” in that description were hyperlinked and the link led to a document that summarized SIPP data instead.
The changes made public yesterday look like another attempt to minimize the under-reporting of insurance coverage in that survey. Basically, rather than ask people if they had insurance at some point in the previous calendar year, the new questionnaire will ask people if they currently have health insurance and then will take them through a series of brief questions to determine when they first got that coverage or when they last had coverage, and so at what point in the prior year they were insured or uninsured. Some tests of this new approach last spring brought the results closer to those of other available surveys and so seemed to correct some of the under-reporting of coverage.
The redesign of the questionnaire began several years ago — I gather, in fact, that elements of it began at the end of the Bush administration. But as with much of the work of federal agencies, it has taken longer than anyone expected and its introduction was not very well coordinated within and outside the Census Bureau.
Its introduction has also certainly been related to the rollout of Obamacare, in two ways. First, because the law has created a new category of insurance coverage (exchange coverage, whether subsidized or unsubsidized) the Census has had to adjust its questions to include that option. That adjustment, and indeed the wording of the relevant questions, has surely involved input from the White House and other executive agencies, and appropriately so. The Times reports: “The Department of Health and Human Services and the White House Council of Economic Advisers requested several of the new questions, and the White House Office of Management and Budget approved the new questionnaire.” In my own limited experience, and in the more extensive experience of several former executive-branch officials I talked with today, that is not unusual.
And second, I have no doubt that in the process of proposing new questions and approving a new questionnaire, the political implications of the timing of this change were thoroughly discussed in some senior-level conversations at the White House. Administration officials were surely aware that making this change in the first year of Obamacare’s rollout would be controversial, and that it would make it difficult to compare coverage levels under Obamacare with those prior to 2013 — leaving us with only one year of comparable pre-Obamacare numbers.
There’s no question this could have some benefits for the administration. It could have some drawbacks too, because it’s clear that different kinds of questions get stronger responses from people with different kinds of coverage, and there has never been a test of how these new questions work with people covered in the exchanges (since those did not exist until this year). But either way, I think it’s very important to clarify the role the White House would have played in this process: The Census Bureau was in the process of carrying out this change; the White House would have been in a position to prevent or delay it but was not, as I now understand it, the moving force behind either its initiation or its timing. To me, this suggests that the problem is not that the White House intervened in this process, but that it didn’t intervene.
It should have. The fact is that as the evidence phase of the Obamacare debate slowly begins in earnest, it would be very useful to have some baseline against which to measure effects on insurance coverage, and the CPS survey, for all its serious flaws, at least offered a relatively stable baseline — even if it did not offer an accurate absolute figure. If we have to choose between the point and the trend, this is a time to pick the trend, regardless of what you think of Obamacare. The Census Bureau (and, failing that, the administration) would have been wise to treat the CPS data as a baseline for measuring change and to leave it alone for a few years.
It’s also far from clear that the gains in accuracy with these new questions will be all that great anyway. Asking people to remember in which months they were and were not insured last year would seem to me to add a new source of errors atop those already there, particularly in a period marked by unusual displacement and disruption in health coverage, much of it caused by Obamacare.
But while I think their decision to proceed with the change was unwise, I don’t think it was nefarious or inappropriate, or that it will prove particularly helpful to the administration’s cause. I suspect it was actually moved a fair bit by a desire to avoid exactly the kind of criticism it has ended up receiving anyway — criticism about political interference in the work of the Census Bureau.
Here, as ever, I would recommend a simple rule for thinking about government, regardless of which party is in power: Don’t attribute to malice what can be adequately explained by incompetence.
Malicious machinations surely do happen, if never quite at House of Cards levels. But they are far more rare than cynics of both parties believe, and they are almost never carried off successfully — in no small part because of incompetence. Modern government is much too complicated to be run very well, and this frustrates most attempts at malicious conspiracy. Of course, it also frustrates attempts to effectively manage a sixth of the economy from Washington.