No one wants to be “average,” but when it comes to medicine, being treated as an individual takes on a whole different meaning. Every patient’s treatment should be informed by the best evidence and the best science we have, but at the end of the day the best outcomes depend on doctors using their best medical judgment to help the patient sitting in their office, not an abstract “average” patient as defined by large studies designed more to cut costs than to optimize outcomes.
Unfortunately, hundreds of millions of dollars in federal funding for something called “comparative effectiveness research” may be used to slash health-care budgets rather than improve individual patient health. Such a strategy will not only harm patients; it is also likely to prove penny wise but pound foolish.
In 2009, President Obama summed up much of the Beltway’s conventional wisdom on health-care reform: that there is a lot of wasteful spending that should be easy to cut. “If there’s a blue pill and a red pill,” the president asked, “and the blue pill is half the price of the red pill and works just as well, why not pay half price for the thing that’s going to make you well?”
Why indeed? But when it comes to drug treatment, choices are not always so clear cut. First, most (about 75 percent) of the prescription drugs used in the U.S. today are already cheap generics. And total drug costs make up just 10–12 percent of U.S. health-care spending. Focusing on the cost of the “red pill” may be good politics (since drug companies are the villains du jour), but it’s not likely to result in any substantial savings.
And even when good generics are available, they won’t be the best option for every patient. Patients vary in how they respond to different treatments (one statin might work for you, but not for me), in what side effects they develop, and in their tolerance for those side effects. Finding the right blue pill or red pill requires more than just comparing average effects and price tags.
This isn’t to say that we can’t do a much better job of using smart research to improve patient care. Doctors can (and should) use well-designed, evidence-based clinical guidelines to help inform how they treat patients. Large studies that follow many patients over time, like the Framingham Heart Study and the Women’s Health Initiative, have revolutionized the treatment and prevention of heart disease, and sharply reduced the use of hormone-replacement therapy by post-menopausal women.
But using CER studies to pick drug “winners” and drive reimbursement decisions based on the average response of the largest number of patients is likely to leave many individual patients without effective options. This was exactly what health-care researchers Tomas Philipson and Eric Sun found in a new report for the Manhattan Institute:
The potential short-term savings [from comparative effectiveness research] is significant. For example, antipsychotic drugs represent one of the largest and fastest-growing expenses for Medicaid. In 2005, a CER analysis of antipsychotic drugs found little difference between the effectiveness of older, cheaper antipsychotics and that of more expensive “second-generation” drugs. We determined that if reimbursement policies had been changed in response and Medicaid had stopped paying for the more costly drugs, it would have saved $1.2 billion out of the $5.5 billion that it spent on these medications in 2005. However, the consequences of this policy shift would have been worse mental health for many thousands of people, resulting in higher costs to society that would equal or outweigh any savings in Medicaid costs.
This result seems counterintuitive: How can it be that, when a CER study shows no difference between two drugs, limiting coverage for the more expensive drug could actually increase costs? The answer is that in most CER studies, it is the drug or treatment with the larger average effect on an entire population that “wins.” In the president’s hypothetical, the blue pills are “just as effective” as the red ones because, on average, they do as much good for patients. But the average patient is not the same as any particular individual patient. Declaring a treatment most effective based on an average is a medical and an economic error…
Philipson and Sun suggest a different approach. Rather than throw the baby out with the bath water and give up on CER research, they suggest designing CER trials that take into account patient variation (based on age, sex, race, and other demographic information) and collect information on how failure with one therapy can predict success with another therapy.
They also encourage CER researchers to use more observational studies, which are based on patients’ real-world insurance records and other data that can capture patients’ tolerance for side effects and other factors that influence whether a drug is effective in the “real world.” Such data may be more helpful to physicians than the pristine but artificial information gleaned from traditional randomized clinical trials.
Their approach would replace one-size-fits-all guidelines with nuanced information that can actually help physicians get to the right treatment faster. It’s also likely to get more “buy-in” from doctors; help patients stay on critical medications longer; and keep patients with chronic ailments out of much more expensive hospital or emergency rooms.
Better-designed CER research will be more convincing and more effective, which is what we should be trying to achieve anyway.