The Agenda

NRO’s domestic-policy blog, by Reihan Salam.

The Bitcoin Romantics

Though Jerry Brito sees great potential in Bitcoin as a value transfer system, he’s profoundly skeptical that fixed-supply commodity money like Bitcoin can (or should) replace a relatively well-managed fiat currency like the U.S. dollar. He does, however, see it as a way to weaken the grip of authoritarian governments:

Because it is decentralized and relies on no third parties, Bitcoin has the ability to bypass capital controls. If you can convert your wealth into Bitcoin, you can get it out of your government’s reach. That is revolutionary. You can then keep that wealth in bitcoins, or convert it to dollars or euros or whatever else you want. This is Bitcoin’s true contribution to “monetary justice.”

Yes, fiat currencies are susceptible to abuse, and while I personally don’t see inflation as the most pressing issue confronting Americans today (especially not from a justice perspective), I can understand that others might. Yet as much as we might want it to be, Bitcoin is not going to be a solution to inflation. Let’s focus our energies instead on pursuing Bitcoin’s strengths: it can make the world better by fostering financial inclusion and by helping the oppressed escape control and censorship. On this, I hope, we can all agree.

And in a similar vein, Eli Dourado addresses the network externalities of cryptoanarchy:

If Bitcoin ever becomes widely adopted, it will change power relations not only between the state and the conscientiously libertarian. No, it will reduce the effectiveness of financial prohibitions between any two parties, regardless of their political views. Whereas exit only secures freedom for the one who is leaving, a cryptocurrency that is successful in the long run will impose a measure of freedom even on those who don’t want it.

There is a word for a change that imposes a radically new political reality on everybody, whether they want it or not. That word is not exit; it is revolution. Revolutionary freedom-advancing technologies have succeeded before. Containerization increased the elasticity of the supply of capital, and packet-switching moved telecom innovation from the core of the network to the edge. The (liberal) changes wrought by these technologies have taken decades to unfold, and the world is still reeling from them. Not everyone is happy about the changes, but they stick with the technologies anyway, because they are both economic and entrenched.

Cryptocurrency is potentially in this class of technologies. If Bitcoin is going to provide any long-run freedom for anyone, it will have to succeed on these terms, not on the basis of cryptoanarchist zeal alone. There is no exit here, nothing has been won, but seeds of revolution have been planted.

While Brito focuses on the ways that Bitcoin can help limit abuses of power, particularly in authoritarian countries like Argentina and Venezuela, Dourado argues that widespread adoption of Bitcoin could lead to a structural shift that would limit the regulatory reach of even fairly liberal market democracies.

Just as Henry Farrell and Martha Finnemore have argued that the Snowden revelations represent “the end of hypocrisy,” as new technologies undermine the extent to which the United States can conceal the workings of the surveillance state, Bitcoin could, in theory, represent “the end of crony capitalism” as an alternative financial services economy emerges. Of course, the dynamics Farrell and Finnemore describe are firmly entrenched while those identified by Brito and Dourado are still in their infancy.

Why We Should Talk About Work and Family Stability

Matt Yglesias of Vox points us to the new World Family Map from Child Trends. Specifically, he highlights the fact that the U.S. is not an outlier when it comes to the share of births to unmarried mothers:

There are a number of other findings that are worthy of note. Consider, for example, the share of children under the age of 18 in households in which the household head is employed:

Americans like to think of the U.S. as a work-friendly country. Yet a far smaller share of American children (71 percent) are raised in households in which the household head is employed than in Sweden (90 percent), Canada (90 percent), Germany (81 percent), or France (88 percent). I should note that the data I’m drawing on, from the Luxembourg Income Study by way of Child Trends, is not perfect, as it uses 2010 data for the U.S., Ireland, and Germany while using 2005 data for Sweden and France. One assumes that Ireland’s shockingly low level of parental employment reflects the severity of its economic downturn. As we’ve discussed, work hours in Sweden have surpassed work hours in the United States, and so it is easy to imagine that parental employment in Sweden continues to surpass that of the U.S.

If you believe that having a working parent is good for children, as it makes it easier to socialize them into the world of work, the fact that parental employment levels in the U.S. are so low should be a matter of concern.

Moreover, there are other dimensions of family structure that merit close attention. One of them is union stability. Yglesias cites the share of children born to unmarried mothers, but nonmarital relationships vary across societies: nonmarital unions in some societies, particularly in northern Europe, are more stable than nonmarital unions in others, like the United States. “Although children born to cohabiting parents are more likely to experience separation than children born to married parents,” write Sheela Kennedy and Elizabeth Thomson in a 2010 survey of family disruption in Sweden, “the difference is smaller in Sweden than in any other country for which we have data.” (Interestingly, the gap in union stability between cohabiting parents and married parents is much larger among less-educated Swedes than it is among their educated counterparts. This class difference in union stability has increased since the 1970s.)

In “Childbearing across Partnerships in the U.S., Australia and Scandinavia,” Elizabeth Thomson, Trude Lappegård, Marcia Carlson, Ann Evans, and Edith Gray observe that the United States is in a class of its own when it comes to the share of women whose second birth is with a different father from the first. They estimate the U.S. proportion at 27 percent of all second births and the Swedish proportion at 12 percent, with Norway and Australia between the two but closer to Sweden than to the U.S. This partly reflects the fact that first births in the U.S. occur disproportionately to young mothers: one-third of first births in the U.S. are to teenagers, as opposed to less than 15 percent in Norway, Sweden, and Australia. The U.S. teen birth rate is falling, as the Pew Research Center reports, but there are pronounced differences across ethnocultural groups, with Hispanic and non-Hispanic black teenagers having notably high birth rates:

As a general rule, children raised in households that experience union instability fare worse than those raised in stable households. Childbearing across partnerships can also introduce complications, including conflict over the distribution of resources across children, which suggests that American children are burdened in ways that Scandinavian children are not. When we also factor in that parental employment levels in the U.S. are much lower than in other market democracies, we start to get a troubling picture of what it’s like to grow up in many American communities. When we try to understand how American children fare when compared to their counterparts in other market democracies, it is important to keep all of these factors in mind.

Rich Lowry and I have argued that American conservatives should be “the party of work.” Though nonmarital childbearing is a problem, it is but one of several problems facing American children, and it is arguably a smaller one than the low level of parental employment and union instability. The American family is changing in ways that make talk of “illegitimacy” less convincing than it might have been in earlier eras. There is a danger that conservatives who focus on the dangers of nonmarital childbearing are preaching to the choir while alienating Americans for whom union instability and unemployment are serious and pervasive problems. Marriage is vitally important. But conservatives would do well to focus on the virtues of work and family stability as much as, if not more than, they focus on the virtues of marriage.

The Most Socialist Aspects of the Military Are No Paradise

Over on the Corner, Jonah makes a number of key points about the bizarre bit of liberal trolling, recently indulged in by Jason Siegel of the Daily Beast, that holds that the military is a “socialist paradise.” Yes, in important senses, it is, and that’s because warfighting is nothing like civilian life.

But I’d also point out that some of the U.S. military’s socialist attributes are also its worst: The military’s compensation and personnel structures are bloated and nonsensical, and they drive a lot of our best officers out of service altogether (an issue Reihan has discussed in this space). Siegel praises, for instance, the system of military commissaries, which sell goods to soldiers on base at below-market prices, a hugely inefficient fringe benefit that most military experts think should be eliminated.

And as we all know, the military’s procurement and technology systems work only because we spend enough on them to make them work, at exorbitant cost. It’s not easy to see a way around this in a lot of cases, but one of the less socialist parts of the military, the Special Operations Command, bypasses a lot of typical military procurement procedures and puts together technologies from the private sector, developing advanced capabilities at remarkably low prices. (That’s the same SOCOM that’s a favorite of President Obama and Secretary Hagel.)

Lastly, at the other extreme, there are even more socialist militaries than America’s — like China’s. And for all the strength China has gained militarily of late and the threat it poses in the Pacific, “party armies” — those run by socialist states — are notoriously incompetent because, like the rest of a socialist state, they are easily distorted for private benefit.

(I see that one of Jonah’s Twitter followers, blogger Bryan McGrath, just made a similar point: that a lot of members of the military prize these benefits and don’t recognize that they’re both a bad idea and in some ways, yes, socialist. Fixing the weird ways we do military compensation will mean upending existing arrangements, but it should ultimately serve soldiers better.)

Sorry Slate, There’s Still No Such Thing as Federal Student-Loan Profits

Slate’s Jordan Weissmann, whose work I normally enjoy, has written a column analyzing something that does not exist — namely, the “profits” earned by the federal government on student loans. As I’ve explained in more detail elsewhere, federal student loans appear to earn a profit only because the government disregards the market risk associated with expecting future repayments. Fair-value accounting (FVA) would incorporate the cost of that risk.

Arguments against FVA usually amount to special pleading. Weissmann’s attempt is a case in point:

Conservatives will argue that the government’s student loan profits are an illusion created by bad accounting. The debate is long and technical, but their basic qualm is that when the government calculates its expected returns, it misleadingly chooses not to factor in all of the same risks that a bank would, such as a possibility of a downturn in the entire market. But the feds do not actually face the same risk as a private investor, because, among other things, it doesn’t have to worry about going bankrupt. Pretending that it does would only inflate the on-paper cost of its lending programs.

But why do the feds not worry about going bankrupt? Because the government can raise taxes when it needs money. So the government does not make risk disappear, it merely passes it on to the taxpayers. Put another way, government ownership does not confer a free lunch in lending markets, it just makes taxpayers responsible for the check.
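
To see how the choice of discount rate manufactures the “profit,” here’s a minimal sketch with made-up numbers; the loan size, repayment stream, and both rates are hypothetical illustrations, not actual federal student-loan figures:

```python
# Illustrative only: loan size, repayment stream, and both discount rates
# are hypothetical, not actual federal student-loan figures.
principal = 100.0        # amount lent today
expected_payment = 13.0  # expected annual repayment, already net of defaults
years = 10

def present_value(rate):
    """Discount the expected repayment stream at the given annual rate."""
    return sum(expected_payment / (1 + rate) ** t for t in range(1, years + 1))

pv_treasury = present_value(0.03)  # government's own borrowing rate
pv_market = present_value(0.07)    # rate that also prices in market risk

print(f"PV at Treasury rate: {pv_treasury:.2f}")  # ~110.89: books a 'profit'
print(f"PV at market rate:   {pv_market:.2f}")    # ~91.31: books a cost
```

The expected cash flows are identical in both cases; only the price put on risk changes, and that is precisely the disagreement between fair-value accounting and current budget rules.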

Think about it this way: If the government really could eliminate market risk, then it should buy up every outstanding private-sector loan — for tuition expenses, houses, cars, credit cards, everything — all at market prices. The loan values supposedly increase once the government owns them, so the purchases will be scored as a huge revenue gain!

And if we’re going to disregard market prices, why stop with loans? The government could also make money by purchasing a warehouse full of straw and declaring the straw to have the same value as gold. I call this Rumpelstiltskin’s First Law of government accounting.

In all seriousness, FVA should not be a partisan issue, as Weissmann seems to suggest it is. The CBO has endorsed it, for example, and so has the Washington Post editorial board. It’s a matter of objective accuracy about the government’s financial position.

Yes, conservatives are naturally attracted to an accounting system that reveals the full cost of government programs, but partisans have actually switched sides on FVA in the past. I’m currently working with New America’s Jason Delisle on a long article showing how politicians from both parties have historically accepted or rejected FVA depending on how it impacts their favorite programs.

Gallup: Half of the Newly Insured This Year Came from the Exchanges

Four percent of Americans tell Gallup they gained health insurance this year, and about half of them say they did so through the Affordable Care Act’s health-insurance exchanges. Note that this is a gross figure; it doesn’t net out the people who lost insurance for whatever reason. The questions were asked as part of Gallup’s daily tracking poll from March 4 to April 14, encompassing 20,000 Americans.

About 12 percent of people said they got a new insurance policy in 2014, and Gallup says these numbers are basically in line with its assessment of changes in the uninsured population, which it has seen drop over the last few months from 18 percent to about 16 percent. (The arithmetic works out because some substantial number of Americans surely lost insurance: roughly 4 percent newly insured minus roughly 2 percent newly uninsured.)

Gallup’s data point is interesting, but because the sample wasn’t focused on the newly insured, the numbers could be way off. Indeed, there has to be something amiss: By Gallup’s reckoning, 12 million American adults gained insurance this year, which isn’t out of the realm of possibility. But the implication that 6 million Americans became newly insured via the exchanges doesn’t seem possible, since there were only about 7.5 million enrollees, and indications are that many enrollees (more than 20 percent of them, though maybe not most) were previously insured. The problem with Gallup’s survey isn’t its size; it’s big enough to have a margin of error of just 1 percentage point. The problem is that we’re looking at a very small subset of it. That margin of error means we can only be 95 percent confident that somewhere between 3 and 5 percent of Americans gained coverage, and thus that as few as 3 million Americans became newly insured through the exchanges. That would be more in line with what other sources have reported, though still pretty high.
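
To make the subset problem concrete, here’s a back-of-the-envelope sketch. The 800-person subsample is my own assumption (roughly 4 percent of the 20,000 respondents), and the formula is the standard normal approximation for a sample proportion:

```python
import math

def moe95(p, n):
    """95 percent margin of error for a sample proportion of p with n respondents."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

full = moe95(0.5, 20_000)  # worst-case MOE for the full sample
sub = moe95(0.5, 800)      # the ~4% of respondents who gained coverage

print(f"Full sample: +/- {full * 100:.1f} points")             # ~0.7
print(f"Newly insured subsample: +/- {sub * 100:.1f} points")  # ~3.5
```

Even in the worst case the full sample’s margin of error is under a point, but any estimate about the newly insured alone carries several points of uncertainty.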

The newly insured population Gallup found tilts low-income; presumably many of the people who got insurance outside the exchanges are getting it through Medicaid. They also look marginally sicker than average:

Though an Obamacare death spiral almost surely isn’t going to happen, enrollees don’t need to be much sicker to force premiums up in the future — though that depends on whether insurers expected their enrollees to be sicker than average in the first place.

Our Federal Tax System Explained, in Charts

In case you want to take a break from doing your taxes to learn about taxes, here’s a brief, graphical rundown of the American federal tax system.

Feel like you pay a lot in taxes? You might, but you probably don’t pay a bigger share of your income in federal taxes than people who earn more than you, and people who earn less than you probably pay a smaller share of their incomes. (This is what makes the system “progressive”; in a “flat” system everyone would pay the same share, and in a “regressive” one the poor would pay a larger share of their income than the rich.)
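
To make those three terms concrete, here’s a minimal sketch with an invented two-bracket schedule (the rates and thresholds are made up for illustration, not actual U.S. law):

```python
# Toy schedule, invented for illustration; these are not actual U.S. brackets.
def progressive_tax(income):
    """10 percent on the first $50,000, 30 percent on everything above it."""
    return 0.10 * min(income, 50_000) + 0.30 * max(income - 50_000, 0)

for income in (30_000, 60_000, 200_000):
    rate = progressive_tax(income) / income
    print(f"${income:>7,}: effective rate {rate:.1%}")

# Effective rates climb with income (10.0%, 13.3%, 25.0%): that's "progressive."
# A flat tax would show the same rate on every line; a regressive one would
# show rates that fall as income rises.
```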

That’s what we see from looking at the effective federal tax rate for Americans in each income quintile (the lowest-earning fifth, the next fifth, and so on) and at the very top of the income scale. As you can see, the system works quite progressively — the poor don’t have to contribute much at all, and the rich don’t get off without paying their share (chart via the Tax Policy Center).

On the other hand, that chart does overstate the progressivity of the whole tax system, because state and local taxes tend to be more or less flat, and some are regressive (sales taxes are regressive; state and local income taxes tend to be pretty flat — I pay nearly the same marginal New York City income tax rate as Michael Bloomberg).

Within the top quintile, the system remains progressive, through the top 1 percent of income earners, though rates do not ramp up dramatically on the Mitt Romneys of the world, since the top tax bracket began at $250,000 for families (when this data was gathered — it’s now $450,000). And since the above rates are just averages, some rich people really do pay lower rates than the middle class, and some people in the middle class do pay remarkably high tax rates, but this is pretty rare, regardless of what Chrystia Freeland is allowed to claim in the pages of The New Yorker.

How Poor Are America’s Poor?

Andrew Prokop of Vox observes that when we look at figures for 2000, we see big differences across a few select countries — the U.S., Finland, Norway, and Sweden — in the share of children in single-mother families living in poverty. (With apologies to our friends at Vox, I’ve borrowed the chart below, which Prokop borrowed from the think tank Demos.)

This comparison is dated, to be sure, and it could be that the U.S. position has deteriorated over the intervening years. But it is worth noting how close the U.S. is to its Nordic counterparts in terms of “predistribution,” i.e., the distribution of income before taxes and transfers, given that union density is notably low in the U.S. and notably high in the Nordic countries, and that American political culture has tended to be more laissez-faire. One interpretation is that solidaristic wage bargaining did not do as much to lift relative market incomes at the bottom of the income distribution in the late 1990s as we might have expected. On the other hand, it seems likely that higher union density contributes to the volume of redistribution via the tax and transfer system.

Yet it is also worth keeping in mind that the chart above contrasts relative poverty levels across countries, with poverty defined as an income at or below 50 percent of the median, rather than absolute incomes. This raises an interesting question: what if median incomes are substantially higher in some of these countries than in others? (For more on this contrast, I recommend Nicholas Eberstadt’s work on the U.S. poverty rate, which he briefly summarized in 2010. I also recommend University of Arizona sociologist Lane Kenworthy’s important work on low-end incomes.)

In “Income Inequality in Richer and OECD Countries,” Andrea Brandolini and Timothy M. Smeeding report real equivalized incomes in 2000 across countries. Real income at P10 (the tenth percentile), P50 (the median), and P90 (the ninetieth percentile) is computed as a fraction of the U.S. median income, giving us a helpful portrait of absolute income differences.

        United States   Finland   Norway   Sweden
P10           39           39       50       39
P50          100           70       89       68
P90          210          114      141      114

Note that I am not comparing apples to apples. This chart compares disposable incomes across low-income, median-income, and high-income families; it does not tell us anything about single-mother families as such. It is, however, suggestive. Brandolini and Smeeding define disposable income as market income less direct taxes (including employees’ contributions to social insurance), ignoring indirect taxes like property, wealth, and value-added taxes; it adds back cash and near-cash public income transfers (for social retirement, disability, and unemployment), universal social assistance benefits, and targeted income transfer programs, including food stamps, housing allowances, and in-work benefits like the EITC. The idea is that we’re getting a portrait of the money households have at their disposal after taxes and transfers. And what is really interesting is that even after Finland and Sweden do so much to raise P10 household incomes, disposable incomes at the low end were roughly the same in those countries as in the United States. Norway, a petrostate with a population of 5 million that is among the world’s wealthiest countries, is in a category by itself: its disposable incomes at P10 are quite high, while its incomes at P50 and P90 are, relative to the U.S., surprisingly low. I’ll leave aside the question of whether higher disposable incomes at P50 and P90 are a good thing, though I don’t think we should dismiss them out of hand.

Brandolini and Smeeding offer a number of important caveats for those inclined to read too deeply into these numbers. Real disposable income is useful, but important goods and services like medical care and education are provided in different ways across different countries, and so real disposable income might not offer a comprehensive portrait of what a given level of real disposable income actually buys. Low-income citizens in some countries have to pay more in out-of-pocket costs to access these services than in others. 

So allow me to share another chart that might be of interest. Back in January, Slate.fr published a fascinating chart (which, once again, I’m borrowing) reporting the results of a French cross-national survey on access to medical care. The question, for those of you who, like me, can’t read French, is as follows: “During the past year, have you or a member of your household been obliged to forgo or defer healthcare treatments due to financial difficulties?” (The data is from the pre-Obamacare era; the survey was conducted by Baromètre Santé et Société Europ Assistance-CSA 2013.)

At first glance, the really impressive thing is that so few Swedes, and so few Britons and Spaniards, report that they were obliged to forgo healthcare treatments due to financial difficulties, which reinforces the case that low-income Swedish households are perhaps better off than their disposable incomes would lead you to believe. (Britons and Spaniards often complain about the rationing, waiting lists, and denials of care that often accompany free-at-the-point-of-use systems, and so I wouldn’t count those systems as quite so successful.)

It is interesting, however, that the U.S. is not an outlier when compared to European countries; it is clustered with a number of countries that offer universal coverage in which cost-sharing plays a substantial role and rationing is less of a feature of the landscape.

So a few concluding thoughts: (a) it is useful to think about absolute household incomes as well as household incomes in relative terms; (b) the quality of public services matters a lot, and Sweden, a country where (for example) the market for education services is much freer than it is in the U.S., seems to do a pretty good job of offering high-quality public services; and (c) it turns out that universal coverage does not mean that households no longer face financial difficulties when it comes to securing medical care, as we see in countries like Germany and France with relatively well-regarded health systems.

Many Americans romanticize European social models, and this in turn leads them to embrace public policy solutions that aren’t a good fit for the particular challenges and demands that obtain in the U.S. My view is that the particular challenges facing the American poor are extremely complex. They reflect social isolation as much, if not more, than they reflect economic deprivation as such. Social isolation is about more than family structure: it is about geography, racialization, and history, and problems that are about more than money require solutions that are about more than money. Some increases in redistribution might be appropriate, if well-designed, and I favor increases in in-work benefits, among other things. But spending levels are only part of the picture.

The Individual Mandate Isn’t Dead, in One Paragraph

From the CBO’s updated estimate of Obamacare’s effect on insurance coverage, which was released today:

Here’s how that breaks down, revenue-wise, over the next ten years:

There was plenty of talk about how the individual mandate has been weakened by the Obama administration (and it’s weak as designed, basically), but it’s not going away. Lots of people are going to pay the penalty, and even if many weren’t aware enough of it to take it into account when considering insurance this year, it’s increasing in future years. People won’t necessarily have “paid” it by the time they make a decision about whether to buy insurance next year, though, because it’s going to be taken out of their federal tax refunds. Open enrollment ends February 15 of next year, while federal taxes aren’t due for two more months after that. Only people who file their taxes relatively quickly will have experienced the mandate charge before open enrollment ends.

Could It Be That Women Are Making Better Occupational Choices Than Men?

There is good reason to believe that the pay gap between women and men is largely a function of different preferences (with respect to working hours, the tradeoff between wages and benefits, the work-related dangers posed by different occupations, and flexibility, among other things). But is this the end of the conversation or just the beginning? One could argue that the real problem we face is that women and men have such different preferences, as these preferences are shaped by social pressures that, for example, lead women to devote more time to raising children than men. Megan McArdle recently addressed this thesis in Bloomberg View, and now Evan Soltas, also writing in Bloomberg View, has done the same. McArdle concludes (correctly, in my view) that there is relatively little the government can do to address the sources of the residual wage gap short of going “inside families, or the subconscious recesses of our minds.” Soltas, in contrast, believes that conservatives are choosing “to ignore lingering inequity,” and that even if they reject Democratic proposals to widen the scope of anti-discrimination lawsuits, they must “figure out an agenda to advance women in the workplace.”

Soltas contributes to the debate by observing that women are actually more underrepresented in certain industries, manufacturing and information, than they had been in the past. Intriguingly, he notes that while women represented 32 percent of the manufacturing workforce in 1990, their share had fallen to 27 percent as of last month, the lowest share since 1971. And in the information sector, the share of women in the workforce has fallen from 49 percent to 40 percent over the same interval. In Soltas’s view, this shift demonstrates that the notion of “men’s jobs” and “women’s jobs” has reasserted itself, and that the substitution of capital for labor in fields like manufacturing and information is “pushing out women more so than men.” (Soltas also briefly addresses inflexible working hours in high-end occupations, a subject we’ve discussed in this space.)

I was hoping that Soltas would offer further thoughts on why the substitution of capital for labor would push out women more than men, as it seems entirely plausible that it would have the opposite effect. The manufacturing sector placed a greater emphasis on physical strength in earlier eras than it does in its current, more heavily-automated incarnation, and you’d think that this would improve the relative position of women seeking employment in factories. So what could be going on?

Manufacturing and information (e.g., computer engineering, telecommunications, and traditional publishing) are both part of the tradable sector, and employment levels in the tradable sector have grown only modestly since 1990 while employment in the nontradable sector has increased considerably. Moreover, there has been a great deal of dislocation within the tradable sector as workers have shifted from manufacturing, which lost 6 million jobs from 2000 to 2009, to other sectors. It could be that women have chosen not to sort into manufacturing in larger numbers because they recognize that manufacturing employment is vulnerable to offshoring, not because of the reassertion of the social concept of men’s jobs and women’s jobs.

In “Is White the New Blue? The Impact on Gender Wage and Employment Differentials of Off-shoring of White-collar Jobs in the United States,” the economists Ebru Kongar and Mark Price found that, after dividing white-collar service occupations between those at risk of being offshored and those that were not, low-wage women’s employment decreased in the at-risk occupations, which in turn led to an increase in the average wage of the women who remained. Kongar and Price were assessing the period from 1995 to 2005, and it is entirely possible that much has changed since then. But their findings point us in an interesting direction: it could be that women are more likely to remain in an at-risk sector if they are earning a higher wage, perhaps because it is the higher-wage jobs that are less vulnerable to offshoring or automation, or because the higher wage compensates for the risk that the job might eventually vanish.

Could it be that it’s not so much sexism that accounts for the declining share of women in the manufacturing and information workforces but rather some difference in underlying risk preferences? In their 2011 paper on “The Evolving Structure of the American Economy and the Employment Challenge,” Michael Spence and Sandile Hlatshwayo divide the economy into tradable and nontradable sectors. Though they don’t explicitly address gender, it is noteworthy that of the large nontradable industries (government, health care, retail, accommodation/food service, and construction), construction was the only one in which women didn’t represent at least half of the workforce as of 2013; that year, women represented 79.6 percent of the health care workforce. (And with the rise of modular construction, it’s not difficult to imagine that construction will at some point become a tradable sector.)

Even if there is on average a female preference for stable employment in the nontradable sector over less-stable employment in the tradable sector, and I’m not at all certain that this is the case, one could argue that this risk aversion reflects a legacy of discrimination. But it could also reflect good judgment. So while I am sure that there is lingering inequity in the labor market, it’s not clear to me that the declining female employment share in manufacturing and information is a clear instance of it.

Equal Pay Would Boost GDP by 9 Percent? Dissecting Another Bogus Wage-Gap Claim

The recent debate over the pay gap between men and women, sparked by President Obama and congressional Democrats’ PR push this week, reminds me of a related controversy. When federal compensation first became a hot-button issue a few years ago, Republican politicians and allied media would point out that the average federal salary was roughly double the average in the private sector.

“Nonsense!” was the collective reaction of the Obama administration, federal employee unions, liberal intellectuals, and the mainstream media. “Federal workers are more educated and experienced than the average private-sector worker,” they said. “It’s apples-to-oranges.”

And they were right. That’s why Andrew Biggs and I conducted federal-private pay comparisons that tried to control for as many relevant factors as possible. The ensuing debate focused on the extent to which we had succeeded in creating an apples-to-apples comparison, not on the appropriateness of attempting a controlled comparison in the first place.

But now left-leaning commentators — including the president himself — eschew controlled comparisons of men’s and women’s earnings. They implicitly attribute the entire wage gap to discrimination. When pressed, they say controls can’t explain all of the difference, and then they go on citing the raw difference as if it meant something.

Unfortunately, unchecked bogus claims will spawn more bogus claims. I ran into a couple today, courtesy of the Anti-Defamation League (ADL). Here’s how the ADL introduces the pay issue (emphasis added):

April 8 is Equal Pay Day, marking the number of days the average woman has to work into the new year to earn what a man in an equivalent job earned in the last calendar year alone. Normally it’s not a day to celebrate. Instead, it serves as a stark reminder that women in the United States still earn only 77 cents for every dollar a man receives.

That’s just plain misinformation. There is nothing about “equivalent jobs” in the 77 cents calculation (see table 1 and figure 2). It’s a straightforward comparison of median full-time earnings for men and women, regardless of the jobs they hold.
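Since the 77-cent figure is just a quotient of medians, the whole calculation fits in a few lines. A minimal sketch of that arithmetic, using made-up earnings samples rather than the actual Census figures:

```python
# How the raw "77 cents" statistic is built: divide women's median
# full-time earnings by men's. No controls for occupation, hours,
# education, or experience enter the calculation at any point.
from statistics import median

# Hypothetical annual earnings for full-time workers (not Census data)
men_earnings = [38_000, 41_000, 50_000, 52_000, 75_000]
women_earnings = [30_000, 35_000, 39_000, 40_000, 58_000]

ratio = median(women_earnings) / median(men_earnings)
print(f"raw gender earnings ratio: {ratio:.2f}")  # 0.78 for these samples
```

The point is that two workforces with entirely different job mixes can produce the same ratio for entirely different reasons; nothing in the computation says anything about “equivalent jobs.”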

The ADL continues:

That fact is morally and socially unacceptable. But it is also economically foolish: the World Economic Forum has said that if women’s pay equaled men’s, the U.S. GDP would grow by nine percent.

That struck me as an odd claim. If women earn less because of discrimination, then paying them more wouldn’t increase GDP — it would just redistribute it.


Why Medicare Transparency Matters






One of the central features of American governance is the use of private agencies to achieve public purposes. SNAP provides low-income households with vouchers they can use to purchase food from private retailers. Students make use of federal student aid to meet the cost of attending private (and public) colleges. And the Medicare and Medicaid systems reimburse private (and public) medical providers for providing Medicare and Medicaid beneficiaries with medical care. There is much to be said for the use of private agencies to achieve public purposes. In theory, private agencies, whether for-profit or non-profit, are more flexible in terms of how they can deploy resources, thus making them better at meeting the changing needs and demands of the populations they serve.

Yet much depends on how the agencies in question actually generate revenue. If the trick to generating revenue is in tension with the public purpose that the subsidy program is designed to achieve, you’re in for a rocky ride. It is important to create incentives that align public purposes with the private goals of the agencies tasked with achieving them. This is extremely difficult, in part because public purposes might change over time. SNAP, for example, has done a reasonably good job of reducing hunger, but policymakers have grown more concerned about whether SNAP beneficiaries are purchasing food that will keep them healthy. Federal student aid has increased access to higher education. It also flows to institutions that do a strikingly poor job of ensuring that their students will complete their degrees in a timely fashion, and that their graduates will experience positive educational and labor market outcomes.

The most straightforward way to better align incentives is to require the private agencies that put public resources to use to disclose information regarding outcomes. Transparency of this kind allows the public sector, and interested third parties, to process information on outcomes to make it more useful and accessible for consumers, taxpayers, and policymakers. This strategy won’t necessarily work. But it’s hard to see how it could hurt.

Of course, not everyone wants transparency of this kind, as we’ve discussed in the context of higher education.


What to Do About Russia’s Weapons Development Push






In March of 2012, President Obama met with his then-counterpart, Russian President Dmitry Medvedev, to discuss a number of issues, including European missile defense. The meeting was memorable primarily because an open mic caught a candid exchange between the two heads of state, which David Nakamura of the Washington Post recounted in detail. Obama sought to reassure Medvedev that “on all these issues, but particularly missile defense, this, this can be solved, but it’s important for him,” meaning incoming Russian president Vladimir Putin, “to give me space.” Medvedev was receptive — “I understand your message about space. Space for you …” — and Obama went on to make himself even more explicit: “This is my last election,” he explained. “After my election, I have more flexibility.” And Medvedev promised to “transmit this information to Vladimir.” 

But how will the president use the flexibility he has gained by virtue of his reelection? Among conservatives, the prevailing view is that the Obama administration will use this flexibility to weaken the U.S. strategic position vis-à-vis Russia. Yet the president could also use his flexibility, and Russia’s blundering intervention in Ukraine, to strengthen the U.S. position by revisiting the Intermediate-Range Nuclear Forces (INF) Treaty.

This idea came to mind as I read Elbridge Colby’s new Foreign Affairs article on Russia’s apparent decision to flout INF, which bans ground-launched missiles, whether armed with nuclear or conventional weapons, with ranges of 500 to 5,500 kilometers. Though we don’t have conclusive evidence, Colby claims that we do have reason to believe that Russia has developed a cruise missile prohibited by INF and that the Russian military “has been keen to escape the INF straightjacket for years.” As Colby reports, a number of conservatives see Russia’s (alleged) violation of INF as an opportunity to abandon the treaty, a position endorsed by National Review. In February, the editors made the case that INF raised the risk of conventional war in Europe in the waning days of the Cold War, and they note ominously that while INF prevents the U.S. from developing intermediate-range weapons, it does nothing to prevent other states, including North Korea and Iran, from doing so.

Colby sees the INF Treaty as valuable insofar as it prohibits Russia from deploying missiles that could reach U.S. allies in Europe and East Asia, as well as other states within intermediate range of Russia’s borders. This is far more constraining for Russia than for the United States, as the U.S. can rely on its aerial and naval assets for its military striking power. Yet Colby also acknowledges that the U.S. might have some use for intermediate-range missiles:

On the one hand, INF does not endanger the United States’ current military effectiveness; U.S. forces can launch accurate strikes from air and sea and can operate drones as needed. On the other hand, INF does prohibit the United States from exploiting at least some attractive options to fill holes in its military posture. Some experts, including Jim Thomas, the vice president of the Center for Strategic and Budgetary Assessments, argue that such systems would fill a rather large gap in the United States’ ability to strike quickly with accurate and effective conventional weapons. Likewise, the Department of Defense has reportedly identified a number of major unmet requirements in the prompt conventional strike mission that at least some in DOD think could best be met through INF-prohibited systems.

This argument has special force because U.S. conventional strike capabilities — and thus the United States’ ability to project power writ large — are under increasing strain. At the broadest level, this stems from the fact that the world is witnessing the unveiling of daunting anti-access/area denial (A2/AD) networks by China and Russia and, increasingly, North Korea and Iran. These networks are designed to blunt American military power and include highly sophisticated air and missile defense systems tailored to block the United States’ preferred ways of operating and striking. These networks will create increasingly, and in some cases dramatically, more challenging environments for U.S. forces.

At the same time that the defensive challenge to U.S. strike capabilities is growing, however, the U.S. conventional strike arsenal is shrinking and aging. For instance, a number of the key weapons and systems that underpin U.S. military supremacy are set to retire, and it is uncertain what will replace them. For example, the United States will soon phase out its Ohio-class SSGNs, which are the stealthy ballistic missile submarines converted to carry and launch conventional cruise missiles. The United States depends heavily on these submarines, each of which carries well over 100 cruise missiles. Yet they are scheduled to be gone by the end of the next decade, and have no clear replacement.

And so he concludes that rather than abandon INF outright, the U.S. and its allies ought to (a) devote more research and development effort to understanding the potential uses of INF-banned systems; (b) punish the Russian government if it is indeed violating INF; and (c) craft a fallback option that might replace INF’s outright bans with less-stringent limits on missile development and deployment.

The Simple Misconception that Keeps People from Recognizing How Big a Scam Film Tax Credits Are






Tim Cavanaugh reports on the homepage that House of Cards is still threatening to leave Maryland if the state doesn’t fund an expansion of its production-tax-credit program — a program, which most states now have, to provide tax benefits to film- and TV-series-production companies.

The empirical economic evidence against such programs is really solid — Tim cites, for instance, research by the notorious Hayekians at the Center on Budget and Policy Priorities. But who trusts that kind of statistical argument, which can be incredibly unreliable?

Well, the theoretical argument against these programs is equally unassailable. The problem is that people often fall prey to explanations like the one Roger Manno, a Maryland state senator who wants to fund an expansion of the credit, gave to Tim when asked about studies showing a bad return on investment for the taxpayer:

Whatever the percentage is, it’s a percentage of an industry that wasn’t in Maryland before the incentive was there. We have determined that they make good economic sense for us. It’s not just sexy. I don’t think any of us are wooed by the shows. It’s kind of neat that they’re here. But we have to make a budget. It has to make economic sense.

I saw a couple of pundits on Red Eye a while ago defend the idea of show-biz tax credits on these very grounds — that the program generates economic activity (“an industry that wasn’t in Maryland before the incentive was there”), so the state can certainly afford to give the producers back some of the taxes they pay, whatever percentage it is. Even if you give the producers a 100 percent tax break, and get no tax revenue from them at all, they spend money elsewhere and those businesses generate tax revenue. The economy’s healthier, and the taxpayer comes out ahead. Right?

Wrong. A production tax credit doesn’t just reduce what producers pay in taxes. Rather, it means Maryland basically writes a check every year to the producers equal to 25 percent of the cost of making a series or movie.

In order for Manno’s precious budget to make up for what the state has just paid the producers, the series has to drive economic activity in the state worth many times what the production itself constituted. The production company may pay a little in taxes itself, but you probably need the filming operation to create four or five times the show’s production costs in additional economic activity for taxpayers to come out ahead.
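The break-even arithmetic can be made explicit. A rough sketch, assuming a 25 percent credit and an illustrative 6 percent effective state tax take on induced activity (both numbers are stand-ins, not Maryland’s actual figures):

```python
# Back-of-the-envelope test of when a film tax credit pays for itself.
credit_rate = 0.25          # share of production spending refunded
effective_tax_rate = 0.06   # assumed blended state tax take on activity

production_cost = 100_000_000                 # hypothetical $100M season
credit_paid = credit_rate * production_cost   # $25M check to the producers

# Induced economic activity needed for tax receipts to recoup the credit
break_even_activity = credit_paid / effective_tax_rate
multiple = break_even_activity / production_cost
print(f"break-even: ${break_even_activity:,.0f} of induced activity, "
      f"or {multiple:.1f}x the production spending")
```

At these assumed rates the multiple comes out to roughly 4.2x, which is where the “four or five times” figure above comes from; a lower effective tax take pushes the required multiple even higher.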


House of Cards viewers will know whether this is about sex or budgets.


What Does It Mean to Be a Neocon? A Reply to My Critics






On Monday, I published a short column arguing that despite the Iraq debacle, I continue to identify as a “neocon,” and I offered thoughts as to why. The column has met with an almost uniformly negative response. In “Why No One Should Still Be a Neocon,” Daniel Larison, an articulate proponent of strategic independence who writes for The American Conservative, describes the neocon impulse as “the impulse to interfere where we aren’t welcome, to dictate to those that reject our advice, and to try to control what is beyond our grasp.” Daniel Drezner regrets that I hadn’t read his recent (gated) International Security article, “Military Primacy Doesn’t Pay.” Joshua Keating suggests that my definition of a neocon is overly broad, and he finds it odd that I don’t address current foreign policy controversies involving Syria, Russia and Ukraine, and Iran’s nuclear program. And though there have been a number of other replies, many of which I’m sure I haven’t seen, the most helpful, from my perspective, was that of Tom Scocca of Gawker, who asked one of his colleagues, Reuben Fischer-Baum, to contrast U.S. expenditures against those of a bundle of other countries (Saudi Arabia, India, Brazil, Russia, Italy, U.K., France, China, Germany, and Japan) not just in defense, but also in health and education. I’ve borrowed the chart below, with thanks to Scocca and Fischer-Baum for having taken the trouble to prepare it. Scocca writes that I am “a sloppy thinker who is deeply confused about history and how the world works,” and he concludes that my “incoherence and ridiculousness” represents a kind of public service in that it demonstrates the extent to which neoconservative thinking has been discredited. You’d almost get the sense that Scocca isn’t a fan of mine.

So who am I calling a neocon? I’m calling a lot of people neocons, including people who’d never dream of applying the label to themselves. My column was admittedly somewhat idiosyncratic, as my intention was to reframe the discussion about neocons. Rather than engage with a caricature (“neocons are people who believe in waging war as the first option”), I sought to identify what I take to be a serious, if contentious, view (“now is not the right time for large-scale strategic retrenchment”). I tried to address, briefly, the notion that U.S. defense expenditures are unreasonably high, a subject to which I’ll return. And I also wanted to highlight that while the Iraq experience has taught us about the limits of military power, we shouldn’t forget that there was a time when U.S. policymakers were so dismissive of humanitarian considerations that they aided and abetted a humanitarian disaster through their malign neglect. Linking these ideas together was a tricky undertaking, and my column was decidedly imperfect. I am, however, glad to have sparked an interesting discussion that has surfaced a lot of resentments, misconceptions, and prejudices, my own very much included.

First, I should note that definitions of “neocon” vary. As Keating argues, I was using a broad, and perhaps overly broad, definition:

Why do I still believe that the U.S. should maintain an overwhelming military edge over all potential rivals, and that we as a country ought to be willing to use our military power in defense of our ideals as well as our interests narrowly defined? There are two reasons: The first is that American strength is the linchpin of a peaceful, economically integrating world; and the second is that we know what it looks like when America embraces amoral realpolitik, and it’s not pretty.

That is, I identify neoconservatism with the belief that U.S. military primacy and U.S. global leadership are valuable and worth sustaining, and also that we ought to define our interests broadly rather than narrowly.


Obamacare’s Enrollees Look Pretty Sickly






A new study of the first two months of health-insurance coverage under plans on Obamacare’s exchanges finds that enrollees seem to be quite unhealthy, judging by the prescription drugs for which they’re filing claims. The analysis by Express Scripts, reported on by Kaiser Health News, found the following:

The new enrollees are more likely to use expensive specialty drugs to treat conditions like HIV/AIDS and hepatitis C than those with job-based insurance.

The sample of claims data — considered a preliminary look at whether new enrollees are sicker than average — also found that prescriptions for treating pain, seizures and depression are also proportionally higher in exchange plans, according to Express Scripts, one of the nation’s largest pharmacy benefit management companies.

The numerical gap is relatively small: 1.1 percent of drug claims in the exchange plans were for “specialty drugs,” versus 0.75 percent in other commercial health plans. That may not sound like much, but specialty drugs account for a quarter of what America spends on pharmaceuticals every year, Express Scripts says, which means it’s a big expense, even before we consider the other treatment costs involved for conditions that require the specialty drugs.
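To make the comparison concrete, the relative rate can be computed directly from the two shares in the Express Scripts report (everything below is simple arithmetic on those published figures):

```python
# The specialty-drug gap in relative rather than absolute terms.
exchange_share = 0.011     # specialty share of exchange-plan claims
commercial_share = 0.0075  # specialty share of other commercial plans

relative_rate = exchange_share / commercial_share
print(f"exchange enrollees file specialty claims at "
      f"{relative_rate:.2f}x the commercial rate")
```

A roughly 1.5x claims rate in a category that drives about a quarter of total drug spending is a meaningful cost difference, the small absolute percentages notwithstanding.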

This more or less makes sense: The Affordable Care Act has hugely raised the (pre-subsidy) cost of insurance on the individual market and made it much more attractive to very sick people than to very healthy people (that is, even more so than any health-insurance market already is).


Some Thoughts on Segregation, Discrimination, and Family and Community Ties






Understanding segregation is essential to understanding the opportunity gaps that have such a profound effect on American life. A growing body of work, which we often discuss in this space, finds that social networks are the main conduits through which people acquire information about economic and educational opportunities, and through which cultural and social capital is transmitted. When we ban formal discrimination in the labor market or in housing, we don’t necessarily break patterns of in-group favoritism that tend to reinforce the dominant position of the dominant social group.

If we want to address opportunity gaps, we have to play a short game and a long game. The short game involves meeting the immediate needs of the poor and the marginalized. For example, how can we raise the quality of educational options available to children living in neighborhoods with high poverty concentrations, and how can our institutions better serve the interests of low-wage workers? The long game is about integration and inclusion.

In-group favoritism may well be a durable fact of life, but the definition of the in-group can change and grow more expansive over time. Yet this process works in complicated ways. The boundaries of the in-group are determined by the perceptions and the decisions of millions of individuals and families — e.g., who is and is not an appropriate choice for a neighbor, friend, or spouse? Perceptions of who is and is not an insider can change at a glacial pace, or they can move like a cascade. This process isn’t always or even often susceptible to policy intervention; indeed, policy interventions in this space often yield surprising, and sometimes quite perverse, consequences. Policy choices can, however, have an effect over very long periods of time. Laws and norms are not one and the same, but they do influence each other. And if the tendency towards assortative mating proves stronger than (say) our tendency to associate with people from similar cultural backgrounds, investing in the human capital of members of marginalized groups will help them gain access to the mainstream.

In the United States, the expansion of the dominant social group is most vividly illustrated by the evolution of who is and is not considered white, a subject that forms the basis of the academic subfield of “critical whiteness studies.” The banal point is that there was a time when various European-origin communities were considered separate and distinct from, and inferior to, a dominant Anglo-Protestant ethnocultural group, yet these communities came to be included in a broader category of American whites, due in part to high rates of intermarriage, the assimilation of Anglo-Protestant norms by immigrants and their descendants, and to some degree the embrace of cultural practices introduced by the relative newcomers.

Some argue that we are seeing a similar process as Americans of Latin American and Asian origin are incorporated into whiteness broadly (very broadly) understood, though I see this as an oversimplification. Ethnic attrition, in which members of minority groups cease to see themselves as members of these groups, is a real phenomenon, yet it is an uneven one. It is not especially rare for people with only one Mexican-origin grandparent to stop identifying as Latino, while it is quite rare for people with three Mexican-origin grandparents to do so. Intermarriage rates, which contribute to ethnic attrition, are much higher for immigrants and second-generation Americans with levels of educational attainment that match or exceed the average for native-born Americans as a whole. And there is tentative evidence that some national-origin groups are more likely to disaffiliate themselves from Latino identity than others. It is also worth noting that some Asian American subgroups are less educated and affluent than others, and this also has bearing on the integration process across groups.

The story for African Americans is notably different from that of Latinos and Asian Americans. More than 8 percent of African Americans are foreign-born, and the second-generation share of the black population is larger still. It is still true, however, that a large majority of black Americans are the descendants of enslaved Africans brought to what is now the United States, which is to say this population is deeply rooted in American life, yet it has been excluded from the dominant social group for a very long time. Integrating African Americans into the dominant social group is a profound challenge — some see it as an insurmountable challenge, particularly if we believe that the expansion of the dominant group through the incorporation of successive waves of immigrants is actually tied to the continued exclusion of blacks. Another way of approaching the issue is to say that the dominant social group is not best defined in racial terms, as there are marginalized whites as well as marginalized blacks, and that decades of upward mobility have meant that many African Americans are members of privileged yet diverse social networks. The problem is that this process hasn’t gone far enough.

One of the best books I’ve read in ages is Patrick Sharkey’s Stuck in Place, which I’ve referenced on several occasions over the past year. Sharkey, a sociologist at New York University who is very much a man of the left, emphasizes the importance of the intergenerational transmission of social outcomes. He argues that the persistence of black poverty is closely tied to the fact that the adverse outcomes associated with poverty are reinforced from generation to generation. His most striking finding concerns cognitive test scores: children raised in nonpoor neighborhoods by parents who themselves grew up in poor neighborhoods fare roughly as well as children raised in poor neighborhoods by parents who grew up in nonpoor neighborhoods, and both groups fare better than children raised in poor neighborhoods by parents raised in poor neighborhoods, yet worse than children raised in nonpoor neighborhoods by parents raised in nonpoor neighborhoods.

It is thus profoundly significant that African Americans are more likely to live in poor neighborhoods than non-blacks at every income level. That is, middle-income blacks are far more likely to live in poor neighborhoods than middle-income whites.

As Harvard economist Edward Glaeser and Duke economist Jacob Vigdor have observed, racial segregation declined in the first decade of this century. They note that while in 1960, half of all black Americans lived in neighborhoods with an African-American share above 80 percent, the same was true of 20 percent of blacks as of 2010. (I’d be curious to see what would happen if we separated foreign-born and second-generation African-Americans from the rest of the black population.) They also found that segregation declines sharply as levels of educational attainment rise among African Americans, but of course this is fully compatible with Sharkey’s analysis.
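Segregation claims like Glaeser and Vigdor’s rest on standard summary measures, the most common being the index of dissimilarity: the share of either group that would have to move for every neighborhood to mirror the citywide mix. A minimal sketch with made-up neighborhood counts:

```python
# Index of dissimilarity: D = 0.5 * sum_i |a_i/A - b_i/B|, where a_i
# and b_i are group counts in neighborhood i and A and B are citywide
# totals. D = 0 means perfect evenness; D = 1 means total segregation.

def dissimilarity(group_a, group_b):
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Four hypothetical neighborhoods; group A is concentrated in the first
group_a = [900, 50, 30, 20]
group_b = [100, 800, 600, 500]

print(f"dissimilarity index: {dissimilarity(group_a, group_b):.2f}")
```

A statement like “half of all black Americans lived in neighborhoods with an African-American share above 80 percent” is a related but distinct measure (exposure/isolation rather than evenness); both kinds of index fell over the period Glaeser and Vigdor examine.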

So why would middle-income and even upper-middle-income black Americans prefer to live in high-poverty neighborhoods? In a recent discussion of Sharkey’s work, Jamelle Bouie of Slate offered a partial hypothesis:

Simply put, the persistence of poor neighborhoods is a fact of life for the large majority of blacks; it’s been transmitted from one generation to the next, and shows little sign of changing. All of which raises an obvious question: Why do blacks have a hard time leaving impoverished neighborhoods?

“When white families advance in economic status,” writes Sharkey, “they are able to translate this economic advantage into spatial advantage by buying into communities that provide quality schools and healthy environments for children.” The same isn’t true for black Americans, and some of the answer has to include present and ongoing housing discrimination. For example, in one study—conducted by the Department of Housing and Urban Development and the Urban Institute—black renters learned about fewer rental units and fewer homes than their white counterparts.

Once you grasp the staggering differences between black and white neighborhoods, it becomes much easier to explain a whole realm of phenomena. Take the achievement gap between middle-class black students and their white peers. It’s easy to look at this and jump to cultural explanations—that this is a function of black culture and not income or wealth. But, when we say middle-class black kids are more likely to live in poor neighborhoods, what we’re also saying is that they’re less likely to have social networks with professionals, and more likely to be exposed to violence and crime. [Emphasis added]

While I don’t disagree that discrimination might play a role, and perhaps even a significant role, I’m not sure that this is the most important part of the story. Two thoughts immediately come to mind. The first is that there is some work, from Patrick Bayer, Hanming Fang, and Robert McMillan, suggesting that residential segregation might actually increase as income differences across groups narrow:

This paper introduces a mechanism that, contrary to standard reasoning, may lead segregation in U.S. cities to increase as racial inequality narrows. Specifically, when the proportion of highly educated blacks rises holding white education fixed, new middle-class black neighborhoods can emerge, and these are attractive to blacks, resulting in increases in segregation as households re-sort. To examine the importance of this ‘neighborhood formation’ mechanism in practice, we propose a new two-part research design that yields distinctive cross-sectional and time-series predictions. In cross section, if our mechanism is important, inequality and segregation should be negatively related for older blacks, as we find using both the 1990 and 2000 Censuses. In time series, a negative relationship should also be apparent, particularly for older blacks. Controlling for white education, we show that increased black educational attainment in a city between 1990 and 2000 leads to a significant rise in segregation, especially for older blacks, and to a marked increase in the number of middle-class black communities, consistent with neighborhood formation. Of broader relevance, our findings point to a negative feedback loop likely to inhibit reductions in segregation and racial inequality over time.

(I intend to write more about Bayer et al.)

You would think that highly educated blacks would be less likely to face housing discrimination, yet it seems that a not inconsiderable number of black Americans prefer living in heavily-black communities when other options are available to them. As the number of affluent black households in a metropolitan area increases, it is possible to live in a black neighborhood that is not disproportionately poor, and so living in an integrated or heavily-nonblack neighborhood is not the only way to avoid some of the downsides of living in a poor neighborhood. This phenomenon has bearing on the diversity of social networks, which has bearing on the workings of in-group favoritism. Yet it is not obvious to me that this is a tendency that we should necessarily seek to counteract through policy intervention. Might at least some middle-income blacks who live in poor neighborhoods actually be choosing to do so? And if this will damage the educational and labor market outcomes of their children, should we have a public education campaign encouraging them not to do so? Once we introduce the possibility that housing discrimination is not the whole story, things suddenly get a lot more interesting and difficult.

And finally, consider one possible reason middle-income African Americans might prefer to live in heavily-black neighborhoods even if the neighborhoods in question are quite poor. Upward social mobility can be more difficult to achieve in some cultural communities than in others; e.g., communities in which successful family members are expected to provide assistance to less-successful family members will also tend to be communities in which it is harder for successful individuals to accumulate assets. Sharing what you have can make it hard to save. At the same time, those who achieve upward mobility by severing social ties to loved ones might find that when they experience a crisis, whether economic or interpersonal, they don’t have people who will be willing to lend them a hand. Different families manage this tradeoff in different ways, and it seems plausible that at least some black Americans are choosing to remain rooted in black communities not because of housing discrimination, but because they are mindful of the importance of maintaining strong family and community ties in the face of uncertainty. This story resonates with me, as I have many loved ones who’ve been plagued by mental illness.

Is the Market Putting an End to the Pay Gap?






President Obama is making a push this week for more legislation and executive action to close the pay gap between men and women employed full-time in the United States. The statistic Democrats usually employ, that women earn 77 cents on the dollar men do, is no more than a measurement of that arbitrary ratio.

While it works as a ploy to get people to pay attention, spending several days using it to sell new discrimination legislation is going to draw you into some problems — as White House economist Betsey Stevenson found out yesterday and press secretary Jay Carney found out today (almost offensively retorting to a reporter that he expected something “more precise” from Reuters when it’s the White House that’s being so imprecise).

One of the best pieces of evidence of how arbitrary that ratio is: It’s much smaller for younger women. Here’s the gap as measured by Pew last year:

As you can see, the overall wage gap is 1) narrower than the White House claims (the 77-cent stat uses full-time workers only, which doesn't take into account that some full-timers work more hours than others) and 2) narrowing over time, because younger women are earning a lot more relative to men their age than older women are.


What Is Bitcoin and What Is It Good For?



In the most interesting piece I’ve seen so far on Ezra Klein’s new site, Vox, Tim Lee explains quite clearly why Bitcoin could be a very big deal — but not quite for the reason you may have heard. It’s as simple as this:

Bitcoin’s detractors are making the same mistake as many Bitcoin fans: thinking about Bitcoin as a new kind of currency. That obscures what makes Bitcoin potentially revolutionary: it’s the world’s first completely open financial network.

History suggests that open platforms like Bitcoin often become fertile soil for innovation. Think about the internet. It didn’t seem like a very practical technology in the 1980s. But it was an open platform that anyone could build on, and in the long run it proved to be really useful. . . .

The Bitcoin network serves the same purpose as mainstream payment networks such as Visa or Western Union. But there’s an important difference. The Visa and Western Union networks are owned and operated by for-profit companies. If you want to build a business based on one of those networks, you have to get permission from the owner.

And that’s not always easy. To use the Visa network, for example, you have to comply with hundreds of pages of regulations. The Visa network also has high fees, and there are some things Visa won’t let you do on its network at all.

Bitcoin is different. Because no one owns or controls the network, there are no limits on how people can use it. Some people have used that freedom to do illegal things like buying drugs or gambling online. But it also means there’s a low barrier to entry for building new Bitcoin-based financial services.

There’s an obvious parallel to the internet. Before the internet became mainstream, the leading online services were commercial networks like Compuserve and Prodigy. The companies that ran the network decided what services would be available on them.

In contrast, the internet was designed for anyone to create new services. Tim Berners-Lee didn’t need to ask anyone’s permission to create the world wide web. He simply wrote the first web browser and web server and posted them online for others to download. Soon thousands of people were using the software and the web was born.

I’m actually somewhat surprised to see the praise Lee’s piece has gotten — not because it isn’t good, but because it seemed relatively obvious. Bitcoin isn’t about to replace the U.S. dollar, but has already begun to be used in a variety of interesting and innovative ways, including, as he mentions, affordable international money transfers.

Why the confusion? In part, perhaps, because the media (or at least the media that I, and Vox's writers, follow) likes exciting political stories more than dense technology ones, and the political narrative about Bitcoin became overly ambitious. The idea of its becoming a functional private currency plays into the libertarian dream of free banking: replacing a government-issued currency with private currencies. Bitcoin is a very long way from being able to immanentize minarchism. The idea of an open financial network, which certain technological innovations of Bitcoin make much easier, is a good bit more feasible, but less thrilling politically. Yet one of the problems facing a product like Bitcoin is drawing in enough users to make it useful, and perhaps some of the political controversy and idealism surrounding the currency has helped create the network effects necessary to get it off the ground.

So in terms of Bitcoin’s future, it’s more useful to think of it as an open-source, liberalized alternative to traditional financial providers and payment systems than as an upending of our entire financial system. A deregulated payment system will create problems of its own, for sure, but it and its imitators will be able to unlock tremendous opportunities, as investor Marc Andreessen argued in a useful though denser explication in the Times in January. (His firm isn’t exposed to Bitcoin’s value per se but has invested almost $50 million in Bitcoin-related start-ups.)

One last note: Lee’s piece is an interesting example of what Vox can be expected to do. The lucid explanation requires taking a side, and deciding which arguments are more or less worth dismissing in order to get the useful information across. That’s about more than “providing context” or “explaining the news,” but it’s going to be useful nonetheless.

How to Rescue New York’s Economy



Paul Howard of the Manhattan Institute offers a sobering analysis of the problems plaguing New York state’s economy: problems rooted, in his telling, in the state’s dysfunctional politics. New York spends more than any other state on its Medicaid program. Yet health outcomes in New York are mediocre, and the state has the worst rate of avoidable hospital admissions in the country. Howard notes that while New York lawmakers have relied on the financial services sector to shoulder the burden of financing the growth in medical expenditures, the non-profit health sector now employs more workers (570,000 as of 2010) than financial services does (487,000). One reason the non-profit health sector has grown so robustly is that it is exempt from New York’s corporate and property taxes. Wealth is being transferred from a highly competitive, knowledge-intensive tradable sector (financial services) to a non-tradable sector plagued by inefficiency and a lack of competition, in which political connections are key to getting ahead. The result is a slow-motion fiscal crisis, as the health sector crowds out other public investments and as a rising tax burden dampens growth potential.

To address New York’s political economy dilemma, Howard calls for a two-fold strategy: first, the state ought to encourage a shift towards consumer-directed health care; and second, it ought to provide consumers and employers with better access to reliable data on health care outcomes. Howard reports that New York state is inching in the direction of liberating data with its new public-private partnership, the state-wide Health Information Network, SHIN-NY:

The easy political sell for the SHIN-NY is that it will improve safety (if you’re unconscious, and can’t relay critical medical information, providers can still access it), reduce duplicative services (allowing providers to access an X-ray or other clinical services provided at a different hospital), and save money.

But I think this is just the tip of the iceberg. Liberating clinical information (and merging it with other data) will allow investors and entrepreneurs to build new tools for measuring outcomes and costs, empowering patients with information that makes them into true consumers. There are real privacy challenges to navigate here, but they’re manageable – starting with ensuring that patients own and control their own health data and can always control who sees it, and when.

Howard believes that SHIN-NY could lead to a more competitive and innovative health sector, as entrepreneurs make use of the new data to build new business models. And the really good news for New York state lawmakers is that these disruptive new enterprises would be taxable for-profits, unlike the non-profit health care behemoths that are contributing to the deterioration of New York state’s tax base.

The Downsides of a One-Size-Fits-All Minimum Wage



You might have heard that Connecticut recently raised its minimum wage, which is now scheduled to reach $10.10 in 2017. This happens to be the exact same minimum wage the Obama administration has proposed for the United States in 2016, so Connecticut’s move isn’t quite as bold as it seems. If President Obama succeeds in passing his federal minimum-wage increase, Connecticut’s new increase will be moot. But it is also worth noting that Connecticut’s median household income ($69,519) is among the highest in the United States, where the national median is $53,046. By way of comparison, Connecticut’s median household income is 78.8 percent higher than Mississippi’s ($38,882).

As we’ve discussed, one of the most problematic aspects of the effort to raise the national minimum wage is that the median hourly wage varies considerably across different U.S. communities; a policy that might work in Connecticut might not work in Mississippi. If we decided to set the minimum wage at 50 percent of the median hourly wage in a given region, we’d see local minimums ranging from $15.72 in high-productivity metros like Silicon Valley to $8.93 in low-productivity metros like the Orlando area. Will the impact of a $10.10 federal minimum wage be identical in both regions? I doubt it.

In a related vein, Andrew Biggs and Mark Perry warn that imposing a uniform minimum wage across the United States will hurt poor regions more than rich regions. They offer the case of Pueblo, Colorado, a low-cost, low-wage labor market. If employers in Pueblo were subject to a much higher federal minimum wage, they wouldn’t simply be able to raise prices, as their customers wouldn’t be in a position to pay higher prices, unlike customers in more affluent regions. The result is that employers in Pueblo and cities and towns like it would have little choice but to economize on labor costs by substituting capital for labor, demanding more intense work effort from existing employees, and cutting back on hiring.

One wonders what will happen to the would-be workers priced out of the labor market as a result. They won’t be able to find work in the formal sector in high-cost, high-wage labor markets. Instead, these women and men will be forced to languish on the sidelines, dependent on some combination of public assistance and private charity. Either that, or we will see a robust increase in the size of the underground economy, as we’ve seen in other countries with onerous labor-market regulations. What might this do to trust in government, or to our ability to protect vulnerable workers?

 
