The “poverty” level is, by definition, pretty much arbitrary. It’s just an income level below which we consider people “poor” and above which we consider them “not poor.” You can argue about how high the income level should be, what mix of basic necessities it should be able to cover, and what should count as “income” in determining whether someone is above or below it (earnings from a job? cash welfare? food stamps? the value of subsidized health care?). You can also argue about how it should change over time: Should it just account for inflation, so someone living right on the poverty line can buy the exact same amount of stuff each year? Or should it keep pace with rising living standards somehow?
It’s pretty hard to justify what we do now, though, which is to adjust it for “inflation” but use an inflation measure — the Consumer Price Index — that pretty much everyone agrees overstates inflation. One big problem with the CPI, for example, is that it fails to account for the fact that when the price of one product rises, people often switch to other products rather than just eating the higher cost (economists call this “substitution bias”). As a result, with each passing year, people who are better and better off fall under the “poverty” level (and various multiples of it) and thereby become eligible for government programs.
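The substitution effect is easy to see in a toy two-good example (the goods, prices, and quantities below are invented for illustration). A fixed-basket formula like the CPI’s prices the old basket at the new prices; a formula that lets the basket shift, such as the Fisher ideal index used in chained measures, registers lower inflation when consumers substitute:

```python
from math import sqrt

# Hypothetical two-good economy; all prices and quantities are invented.
# Base period: both goods cost $1.00 and the household buys 10 of each.
# Next period: apples jump to $1.50, so the household buys fewer
# apples (5) and more oranges (15).
p0 = {"apples": 1.00, "oranges": 1.00}
q0 = {"apples": 10, "oranges": 10}
p1 = {"apples": 1.50, "oranges": 1.00}
q1 = {"apples": 5, "oranges": 15}

goods = p0.keys()

# Laspeyres (fixed-basket, CPI-style): old quantities at new prices.
laspeyres = sum(p1[g] * q0[g] for g in goods) / sum(p0[g] * q0[g] for g in goods)

# Paasche: new quantities at both periods' prices.
paasche = sum(p1[g] * q1[g] for g in goods) / sum(p0[g] * q1[g] for g in goods)

# Fisher ideal index (basis of chained measures): geometric mean of the two.
fisher = sqrt(laspeyres * paasche)

print(f"Fixed-basket inflation:      {laspeyres - 1:.1%}")  # 25.0%
print(f"Substitution-aware inflation: {fisher - 1:.1%}")    # ~18.6%
```

The fixed-basket number comes out higher because it pretends the household keeps buying ten apples even at $1.50 apiece.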
Back in May, the Trump administration announced it was exploring alternative inflation measures to deal with this problem. Under a better measure of inflation, the poverty level would grow slightly less each year. And many on the left have been panicking about this possibility ever since.
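The stakes come from compounding: a slightly smaller adjustment each year adds up. As a rough sketch (the $25,000 base and both growth rates are assumptions for illustration, not official figures; chained CPI has historically run roughly a quarter of a percentage point below the standard CPI):

```python
# Illustrative only: a hypothetical $25,000 threshold indexed for 20
# years at 2.4% (standard-CPI-like) vs. 2.15% (chained-CPI-like,
# about 0.25 percentage points lower per year).
base = 25_000.0
years = 20

cpi_threshold = base * 1.024 ** years      # roughly $40,200
chained_threshold = base * 1.0215 ** years # roughly $38,300

gap = 1 - chained_threshold / cpi_threshold
print(f"After {years} years the chained threshold is {gap:.1%} lower")
```

A gap of a few percent in the threshold translates into a meaningfully smaller population counted as poor, and as eligible for programs keyed to the poverty line, which is exactly what the critics object to.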
Senator Bob Casey and poverty scholar Indivar Dutta-Gupta, for example, have this in Politico today, headlined “A cynical way to make poor people disappear.” Naturally, they’d like to see more people considered poor, and they have some alternatives in mind:
Any honest attempt to update the poverty measure should include a wider range of basic needs – including housing, child care, long-term care, health care, access to internet and phone services, transportation and utilities. And that would result in more—not fewer—Americans identified as needing access to programs to ensure a basic foundation for every child and family.
In fact, the Obama administration began to develop that kind of modernized poverty measure. An extensive, interagency process led to the development of the Supplemental Poverty Measure, which quantifies the poverty threshold based on food, clothing, shelter and utility expenses, while taking into account family size, composition and geographical adjustments. The supplemental poverty measure, largely based on a consensus from the National Academy of Sciences, also subtracts necessary medical expenses and work-related expenses, such as child care, from income.
The Obama administration was not alone in attempting to modernize the poverty measure. Several organizations and research centers set out to calculate livable incomes. The National Center on Children in Poverty created the Family Resource Simulator to illustrate the impact of work supports, including income tax credits and child care assistance, offering a more complete picture of how family resources change as earnings increase. NCCP suggests families typically need nearly twice as much as the official poverty level to make ends meet thanks to factors like rent and utilities, child care, health insurance premiums, out-of-pocket medical expenses, transportation, debt and payroll taxes. The Economic Policy Institute’s Family Budgets, MIT’s Living Wage Calculator and the University of Washington’s Self-Sufficiency Standards all came to similar conclusions.
There are other attempts to calculate poverty and changes in it, though, that point in the opposite direction — and go mysteriously unmentioned in the piece. The White House’s Council of Economic Advisers, for instance, recently showed that if you define poverty the way it was defined in 1963, just before we declared “war” on it — and use accurate inflation and income measures to assess changes since then — poverty in the U.S. has fallen to just 2.3 percent, which is to say it has been pretty much eliminated. Consumption-based poverty measures show a similar trend: Very few people consume as little as the poor did half a century ago, and many people are considered “poor” under current measures of poverty despite consuming much more.
Reasonable people can disagree about which of these methods is the best, and in turn about whether the U.S. should be cutting or expanding aid to the poor in general. But if using an objectively better inflation adjustment is a “cynical way to make poor people disappear,” pushing the measure in the opposite direction must be a cynical way to classify people as poor and drive up welfare use, because there’s nothing that makes one set of measures cynical and the other not. They’re all just different ways of answering the questions I laid out at the beginning of this post — about how poverty should be defined and how the definition should evolve over time.