
The Agenda

NRO’s domestic-policy blog, by Reihan Salam.

Why Medicare Transparency Matters






One of the central features of American governance is the use of private agencies to achieve public purposes. SNAP provides low-income households with vouchers they can use to purchase food from private retailers. Students make use of federal student aid to meet the cost of attending private (and public) colleges. And Medicare and Medicaid reimburse private (and public) medical providers for the care they deliver to beneficiaries. There is much to be said for this approach. In theory, private agencies, whether for-profit or non-profit, are more flexible in how they deploy resources, and thus better at meeting the changing needs and demands of the populations they serve.

Yet much depends on how the agencies in question actually generate revenue. If the trick to generating revenue is in tension with the public purpose that the subsidy program is designed to achieve, you’re in for a rocky ride. It is important to create incentives that align public purposes with the private goals of the agencies tasked with achieving them. This is extremely difficult, in part because public purposes change over time. SNAP, for example, has done a reasonably good job of reducing hunger, but policymakers have grown more concerned about whether SNAP beneficiaries are purchasing food that will keep them healthy. Federal student aid has increased access to higher education, but it also flows to institutions that do a strikingly poor job of ensuring that their students complete their degrees in a timely fashion and that their graduates experience positive educational and labor-market outcomes. The most straightforward way to better align incentives is to require that the private agencies putting public resources to use disclose information about outcomes. Transparency of this kind allows the public sector, and interested third parties, to process outcome data and make it more useful and accessible for consumers, taxpayers, and policymakers. This strategy won’t necessarily work. But it’s hard to see how it could hurt.

Of course, not everyone wants transparency of this kind, as we’ve discussed in the context of higher education.

If you know that you’re not doing a terribly good job of transforming public resources into good outcomes, it is easy to see why you’d want data on outcomes to be impossible, or at least extremely difficult, to find or access.

Darius Tahir, writing in The New Republic, points us to a paradigmatic example of this dynamic. The Centers for Medicare and Medicaid Services (CMS) has released data on how much Medicare has been reimbursing individual doctors for performing various services. Since 1979, CMS had been barred from releasing this data by a federal court ruling, but last year a federal judge lifted the ban, and the floodgates have opened. Tahir notes that valuable information is already trickling out:

Initial reporting from the Wall Street Journal indicated that the top 1% of providers accounted for 14% of Medicare billing, with ophthalmologists making up roughly one-third of the top 1,000 billers. A report from the Department of Health and Human Services’ inspector general argued that the agency should scrutinize that specialty more closely, and this data shows why. The New York Times on Wednesday reported that two Florida physicians who had the highest Medicare reimbursements in the country were also generous donors to the Democratic Party. CMS hopes to encourage more such investigating, and not just from professional reporters. It has sponsored a contest for coders, calling on them to take the data and render it in a form that’s usable and interesting. The winner gets $20,000.

Insurers could use the data, too—utilizing CMS’s data in conjunction with their own to weed out bad actors and potentially reduce payments.

Yet Tahir also quotes University of Michigan law professor Nicholas Bagley, who warned in an Incidental Economist post that we shouldn’t get our hopes up:

CMS hopes the data will “help consumers compare the services provided and payments received by individual health care providers. Businesses and consumers alike can use these data to drive decision-making and reward quality, cost-effective care.” The word choice here—“consumers,” not “patients”—is a cue that CMS wants to enlist market forces to discipline errant physicians. Call it consumer-directed health care, Medicare-style.

There’s reason for skepticism, though. Information disclosure is a common regulatory tool. It’s been studied a lot. And in most settings, it just doesn’t work. Omri Ben-Shahar and my colleague Carl Schneider have recently released a book, provocatively titled More Than You Wanted to Know: The Failure of Mandated Disclosure, that canvasses the demoralizing evidence. (Their earlier article on the same theme is available here.) Nor is it clear that employers and insurers will leverage the data in shaping their provider networks or honing their cost-control strategies. An extensive 2000 review of the evidence about publicly available information on provider quality concluded that “[n]either individual consumers nor group purchasers appear to search out, understand, or use the currently available information to any significant extent.”

I am more optimistic than Bagley, though I don’t discount the value of his sobering work. Disclosure on its own really isn’t all that useful. What needs to happen is for innovative new enterprises to identify meaningful patterns in the newly available data that can help patients achieve better outcomes while also containing costs. The cost of computation has plummeted in recent years, as have many of the other costs associated with launching new ventures. This has led to major breakthroughs in data analysis, as Kenneth Neil Cukier and Viktor Mayer-Schoenberger recount in a recent Foreign Affairs article:

Instead of trying to “teach” a computer how to do things, such as drive a car or translate between languages, which artificial-intelligence experts have tried unsuccessfully to do for decades, the new approach is to feed enough data into a computer so that it can infer the probability that, say, a traffic light is green and not red or that, in a certain context, lumière is a more appropriate substitute for “light” than léger.

Using great volumes of information in this way requires three profound changes in how we approach data. The first is to collect and use a lot of data rather than settle for small amounts or samples, as statisticians have done for well over a century. The second is to shed our preference for highly curated and pristine data and instead accept messiness: in an increasing number of situations, a bit of inaccuracy can be tolerated, because the benefits of using vastly more data of variable quality outweigh the costs of using smaller amounts of very exact data. Third, in many instances, we will need to give up our quest to discover the cause of things, in return for accepting correlations. With big data, instead of trying to understand precisely why an engine breaks down or why a drug’s side effect disappears, researchers can instead collect and analyze massive quantities of information about such events and everything that is associated with them, looking for patterns that might help predict future occurrences. Big data helps answer what, not why, and often that’s good enough.
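The counting approach Cukier and Mayer-Schoenberger describe can be sketched in miniature: rather than encoding rules for when “light” means lumière versus léger, simply tally which rendering co-occurs with which context word and pick the most frequent. A toy illustration, with a hypothetical handful of context pairs standing in for the millions of aligned sentences a real system would use:

```python
from collections import Counter

# Hypothetical toy corpus: (English context word, observed French rendering
# of "light"). A real system would derive these pairs from a huge corpus
# of aligned bilingual text.
corpus = [
    ("bright", "lumière"), ("lamp", "lumière"), ("sun", "lumière"),
    ("lamp", "lumière"), ("weight", "léger"), ("luggage", "léger"),
    ("weight", "léger"), ("meal", "léger"),
]

def best_translation(context_word):
    """Return the rendering seen most often alongside the context word."""
    counts = Counter(t for c, t in corpus if c == context_word)
    if not counts:
        return None  # unseen context: a real system would back off to a prior
    return counts.most_common(1)[0][0]

print(best_translation("lamp"))    # -> lumière
print(best_translation("weight"))  # -> léger
```

No one “taught” the function any grammar; the pattern falls out of the counts, which is exactly the correlation-over-causation point the quoted passage is making.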

I’d be surprised if new business models did not emerge to make sense of this new information — not because individual consumers and incumbent insurers will make use of it, but because those businesses that learn how to make use of it will have a significant edge over those that don’t.


