
The Agenda

NRO’s domestic-policy blog, by Reihan Salam.

Innocents and Skeptics






Rick Perlstein’s The Invisible Bridge, a sweeping account of the American political scene from Richard Nixon’s 1972 reelection to the presidential campaign of 1976, when Ronald Reagan emerged as the Republican heir apparent, has occasioned two excellent reviews.

The first, by Geoffrey Kabaservice in The National Interest, surprised me. Kabaservice is the author of Rule and Ruin, an account of the GOP’s transformation from an ideologically diverse party into an ideologically unified one, and because he has scathingly criticized the conservative movement that came to dominate the party, I had expected him to sympathize with Perlstein’s jaundiced take on the charlatanism of the modern Republican right. Instead, Kabaservice teases apart what he sees as Perlstein’s too-neat division of American society into those who had grown suspicious of patriotic shibboleths in the wake of Watergate, with whom the author very clearly identifies, and the innocents who clung bitterly to the idea of America as “God’s chosen nation,” and who came to resent liberal critics of mainstream American mores. This neat framework overlooks the “complicated and somewhat contradictory views” held by conservatives and liberals alike, almost all of whom are “both innocents and skeptics in various measures.” According to Kabaservice, “today’s conservatives are simultaneously critics and boosters of America, fearful of its big government and deeply suspicious of its politics and culture while in the same breath maintaining that it is still the envy of the world.” To single them out as uniquely ingenuous is to fail to do them justice, and to give their political rivals more credit than they deserve.

And having closely studied Ronald Reagan’s rise for Rule and Ruin, Kabaservice offers a more complicated portrait of a pragmatic politician who emerged as an unlikely conservative folk hero:

Perlstein fails to grapple with what made Reagan a successful conservative politician in a liberal state, who would use his broad appeal first to come close to toppling Ford in 1976 and then to win the presidency outright in 1980. Perlstein equates Reagan’s early 1960s conservatism with the paranoia of the John Birch Society, but makes little effort to figure out why Reagan was able to campaign as a big-tent Republican or govern as a pragmatist. Perlstein claims that Reagan’s goal was to purify the GOP by kicking out all who did not subscribe to rigid conservative principles, when in fact Reagan opposed this sort of ideological cleansing. Reagan told California’s conservative activists in 1967 that they had an obligation “not to further divide but to lead the way to unity. It is not your duty, responsibility or privilege to tear down or attempt to destroy others in the tent.” He warned that “a narrow sectarian party” would soon disappear “in a blaze of glorious defeat.” The conservatives would have booed anyone else off the stage for offering this diagnosis, but they obeyed Reagan.

It’s still a mystery why a governor who passed the largest tax increase in his state’s history, signed the nation’s most liberal abortion bill and no-fault divorce law, and supported gun control and pioneering environmental legislation could have remained a hero to the conservative movement. It would never happen nowadays, but Reagan somehow threaded the needle. It’s not enough to say, as Perlstein does, that Reagan was merely opportunistic or sought to blame his actions on the liberals in the California legislature, who were “furtive and diabolical in ways unsullied innocents could not comprehend.”

Indeed, Kabaservice reminds his readers that many conservatives backed Reagan only out of a sense of resignation, as their hearts belonged to harder-edged figures whose convictions were more unyielding than Reagan’s. And whether or not you embrace Kabaservice’s critique of the conservative movement, it is clear that he understands it deeply.

The second review, by Christopher Caldwell in Bookforum, is interesting throughout and far more sympathetic to Perlstein than Kabaservice’s, particularly in its admiring conclusion. Caldwell’s discussion of Reagan as “a protean personality” is as intelligent as you’d expect. Yet I was particularly intrigued by Caldwell’s provocative, and convincing, counterinterpretation of the Watergate scandal. Caldwell argues that while Perlstein believes Nixon to have been both dangerous and ruthless, Perlstein’s narrative offers evidence for another interpretation entirely: “Nixon lost his job because people feared him less than they did his adversaries.” Nixon’s various abuses of power were, if anything, far surpassed by those of his Democratic predecessors, from whom Nixon and his allies learned a great deal. “If impeachment was warranted because Nixon was corrupt,” Caldwell writes, “it was actually carried out because he was weak and trusting and his party upstanding,” as various Republicans refused to close ranks behind him. And so Caldwell suggests that Watergate is perhaps best seen “as a kind of conspiracy or coup.” One hopes that Caldwell will at some point revisit this idea at greater length.

Common Core Validation Committee Member: ‘Nobody Thought There Was Sufficient Evidence’ for the Standards






It’s all too common: The backers of a broad-based political movement claim their cause is steeped in evidence, but a perusal of the research reveals more hope than substance. The Common Core education standards are a good example. As I noted last week, George Washington University’s compendium of 60+ research papers on Common Core included just two focused on the standards’ impact on student achievement, and the results were mixed at best.

The people who developed and validated the Common Core have themselves acknowledged its weak evidence base. That’s clear from an article in the November 2013 issue of the American Journal of Education. Written by two UC Santa Barbara professors, Lorraine M. McDonnell and M. Stephen Weatherford, the article features anonymous interviews with Common Core’s leading designers. The article’s purpose is academic — to analyze whether Common Core’s development fits the social-science model for how research affects policy — but it contains a lot of practical information that should inform the ongoing standards debate.

McDonnell and Weatherford are clear that research evidence did play a role in Common Core’s development, but almost all of the evidence was used either to identify problems (such as America’s poor ranking on international tests) or to generate hypotheses (for example, that higher achieving countries have superior standards). When it came time to actually write the standards, the developers could not draw from a large store of empirical evidence on what works and what doesn’t. They had little to go on except the standards of high-performing nations and the “professional judgment” of various stakeholders.

McDonnell and Weatherford give the example of learning trajectories in mathematics. While developmental psychologists have studied how sequencing affects math learning in early childhood, much less is known about learning trajectories in later years. So the standards writers asked for the “best judgments” of people who study math education. Regarding the frequent use of expert judgment in lieu of data, one Common Core developer told the authors, “We wanted to be able to cite non-peer-reviewed research because there’s not enough research available, and often the findings are inconclusive.”

Another developer said that Common Core is, scientifically, merely a work in progress: “If we waited for the perfect research to inform the development of the standards, we would never have the standards today. . . . As we move deeper and deeper into implementation . . . further research will inform future iterations of the standards.”

After the drafting stage, the validation committee also recognized that the standards were informed by intuition as much as real research. According to one committee member:

It was pretty clear from the start that nobody thought there was sufficient evidence for any of the standards. . . . The review process, in short, was inclusive and involved feedback from a lot of different perspectives. This is not ‘sufficient research evidence,’ but it is thoughtful professional judgment, applied systematically. [ellipsis in original]

The Common Core developers were warned by some researchers that the link between standards and achievement was tenuous, and that other reforms (“enabling conditions”) would be necessary to see real progress. But, in the words of McDonnell and Weatherford:

Common Core advocates understood what researchers were telling them about enabling conditions. However, during this stage of the policy process, they chose to downplay them because they would complicate the agenda at a time when a policy window was opening but might not be open for long.

None of this should be taken as evidence of a conspiracy. Common Core proponents believe in their cause and have understandably sought to portray it in the best possible light. And it’s not implausible that the standards could raise student achievement, assuming that strict accountability measures can force public schools to improve. But the truth is that we know little about the connection between standards and achievement, and it will be difficult to justify standards-based reform without knowing more. 


What If GDP Growth Remains Stubbornly Low?






Robert J. Gordon offers yet another pessimistic assessment of America’s future growth prospects in his latest NBER working paper. While the CBO projects that U.S. GDP will grow at an average annual rate of 2.2 percent over the next decade, Gordon estimates that the economy will instead grow at a rate of 1.6 percent:

Forecasts for the two or three years after mid-2014 have converged on growth rates of real GDP in the range of 3.0 to 3.5 percent, a major stepwise increase from realized growth of 2.1 percent between mid-2009 and mid-2014. However, these forecasts are based on the demand for goods and services. Less attention has been paid to how the accelerated growth of real GDP will be supplied. Will the unemployment rate, which has declined at roughly one percent per year, decline even faster from 6.1 percent in June, 2014 to 3.0 percent or below in 2017? Will the supply-side support for the demand-side optimism be provided instead by a major rebound of productivity growth from the average of 1.2 percent over the past decade and 0.6 percent for the last four years, or perhaps by a reversal of the minus 0.8 percent growth rate since 2007 of the labor-force participation rate?

The paper develops a new and surprisingly simple method of calculating the growth rate of potential GDP over the next decade and concludes that projections of potential output growth for the same decade in the most recent reports of the Congressional Budget Office (CBO) are much too optimistic. If the projections in this paper are close to the mark, the level of potential GDP in 2024 will be almost 10 percent below the CBO’s current forecast. Further, the new potential GDP series implies that the debt/GDP ratio in 2024 will be closer to 87 percent than the CBO’s current forecast of 78 percent.

This paper also has profound implications for the Federal Reserve. The unemployment rate has declined rapidly, particularly within the last year. Faster real GDP growth will accelerate the decline in the unemployment rate and soon reduce it beyond any estimate of the constant-inflation NAIRU, even if productivity growth experiences a rebound and the labor force participation rate stabilizes. The macro economy is on a collision course between demand-side optimism and supply-side pessimism.

Three issues immediately come to mind. 

The first is that Gordon’s pessimistic thesis rests in part on a plausible account of the rate of improvement in labor quality. For much of the twentieth century, the educational attainment of the U.S. workforce steadily increased. It now appears to have plateaued, for a variety of reasons. Gordon’s case is strengthened by the fact that less-skilled immigration appears to have lowered the average skill level of the U.S. workforce. As Mary Alice McCarthy has observed, “one in six U.S. adults lack basic literacy and numeracy skills, compared with one in twenty in Japan.” The presence of less-skilled immigrants in the workforce could allow other workers to invest more in their human capital, by facilitating the outsourcing of household production. But it is not at all clear that this effect fully outweighs the direct impact of less-skilled immigration on average labor quality, and the indirect impact that flows from the fact that the children of less-skilled immigrants tend to have low rates of educational attainment relative to the U.S. average. One obvious way to address concerns about labor quality would be to embrace an immigration policy that increases rather than decreases the average skill level of the U.S. workforce. (Alternatively, one might conclude that the dampening effect of less-skilled immigration on growth in GDP per capita is not reason enough to oppose it, as it is fully compatible with growth in “income per natural.”)

The second is that, as Scott Winship argues in “The Affluent Economy,” even sluggish growth from a high base implies substantial absolute gains. 

The third is the question of “digital dark matter,” which Shane Greenstein and Frank Nagle discuss in a new paper:

Researchers have long hypothesized that research outputs from government, university, and private company R&D contribute to economic growth, but these contributions may be difficult to measure when they take a non-pecuniary form. The growth of networking devices and the Internet in the 1990s and 2000s magnified these challenges, as illustrated by the deployment of the descendent of the NCSA HTTPd server, otherwise known as Apache. This study asks whether this experience could produce measurement issues in standard productivity analysis, specifically, omission and attribution issues, and, if so, whether the magnitude is large enough to matter. The study develops and analyzes a novel data set consisting of a 1% sample of all outward-facing web servers used in the United States. We find that use of Apache potentially accounts for a mismeasurement of somewhere between $2 billion and $12 billion, which equates to between 1.3% and 8.7% of the stock of prepackaged software in private fixed investment in the United States and a very high rate of return to the original federal investment in the Internet. We argue that these findings point to a large potential undercounting of the rate of return from IT spillovers from the invention of the Internet. The findings also suggest a large potential undercounting of “digital dark matter” in general.

Greenstein has also cited Wikipedia, where U.S. households spend more than half of 1 percent of their time online, as another “visible piece of digital dark matter.” Elsewhere, Joel Mokyr offers a closely related argument:

[E]conomists are trained to look at aggregate statistics like GDP per capita and measure for things like “factor productivity.” These measures were designed for a steel-and-wheat economy, not one in which information and data are the most dynamic sectors. They mismeasure the contributions of innovation to the economy.

Many new goods and services are expensive to design, but once they work, they can be copied at very low or zero cost. That means they tend to contribute little to measured output even if their impact on consumer welfare is very large. Economic assessment based on aggregates such as gross domestic product will become increasingly misleading, as innovation accelerates. Dealing with altogether new goods and services was not what these numbers were designed for, despite heroic efforts by Bureau of Labor Statistics statisticians.

The aggregate statistics miss most of what is interesting. Here is one example: If telecommuting or driverless cars were to cut the average time Americans spend commuting in half, it would not show up in the national income accounts—but it would make millions of Americans substantially better off. Technology is not our enemy. It is our best hope. If you think rapid technological change is undesirable, try secular stagnation.

Mokyr’s point is well taken. It could be that Gordon’s pessimistic assessment of U.S. growth potential rests on a failure to fully account for the value created by various new technologies. Yet part of the reason economists are trained to look at aggregate statistics is that these statistics serve as a guide for policymakers. By referencing the fact that cutting the time Americans spend commuting in half would have no impact on GDP, Mokyr is highlighting the limitations of GDP — clearly cutting commutes in half would make Americans better off. Yet unless shorter commutes lead Americans to work commensurately longer hours, shorter commutes will not lead to an increase in taxable income that can then be used to finance, say, military expenditures or redistribution. 

And this is important to understand. Many new technologies make us better off in ways that can’t be taxed away. Consider some of the recent research on “overearning,” which has been ably summarized by Dave Nussbaum:

Are people working too much? A psychological researcher trying to determine the answer would first need to ask, how would we even know? If someone works a lot, she may have perfectly good reasons. Perhaps she wants to save enough money to retire comfortably, to have a cash cushion for emergencies, or to have money to pass on to heirs. To answer the question, [Christopher] Hsee and his colleagues tried to strip away all complicating factors in order to study the underlying psychology.

Doing this is “an advantage rather than a defect or compromise,” says Hsee. His rationale is that if you remove the reasons people have to earn more money and yet still find that they overearn, “that shows the overearning is real.”

Nussbaum goes on to describe the experiment, which is quite clever. Let’s say productive workers across the country decided to stop “overearning” — what would happen to tax revenues? Nothing good, I suspect. I’m reminded of Eli Dourado’s contrast between technologies of control and technologies of resistance. As governments grow more intrusive, more effort is directed away from the “brute maximization of production” and towards “producing things that we already know how to produce in ways that have ancillary benefits,” like evading control. Or, in a related vein, a person might decide that (say) the interaction of means-tested subsidies and taxes is such that it’s best to cultivate a preference for non-pecuniary goods, which are arguably more abundant than ever, as Tyler Cowen maintains.

So a few things could be true at the same time: Robert J. Gordon could be right that U.S. GDP growth potential will be quite low for the foreseeable future; growth in GDP per capita could be understating the extent to which quality of life is improving; the federal government’s fiscal capacity could be constrained by sluggish GDP growth, which in turn might limit future increases in redistribution (per Winship, redistribution could still increase in absolute terms, but the scope will be limited all the same); and the key to life might be learning to love things that are free or cheap. When we talk about the importance of education as a policy matter, we generally focus on how educational attainment allows people to become better producers; but perhaps we ought to think more about how it might allow people to become “better” consumers (see Michael Schrage’s Who Do You Want Your Customers to Become? for more on a related idea). Eventually, we might come to believe that the chief injury caused by a lackluster education is not so much the way it reduces earning potential but rather the way it limits our ability to appreciate free or cheap stuff, like writing. This sounds rather self-serving, doesn’t it? I’m not totally sure I buy it myself.

Scandinavia’s ‘Right-to-Work’ Unionism






Though I often disagree with Justin Fox, I’m a fan of his writing. And so I was surprised by his recent discussion of Jake Rosenfeld’s new lament for organized labor’s decline, What Unions No Longer Do. I have yet to read Rosenfeld’s book, and it’s possible that there is a great deal that’s been lost in the translation from the book to Fox’s discussion of it. Just to be clear, I’m reacting to Fox’s brief remarks and not to the book itself. 

The decline of unions in the U.S. has often been painted as inevitable, or at least necessary for American businesses to remain internationally competitive. There are definitely industries where this account seems accurate. Globally, though, the link between unionization and competitiveness is actually pretty tenuous. The most heavily unionized countries in the developed world — Denmark, Finland, and Sweden, where more than 65% of the population belongs to unions — also perennially score high on global competitiveness rankings. The U.S. does, too. But France, where only 7.9% of workers now belong to unions (yes, France is less unionized than the U.S.), is a perennial competitiveness laggard.

This is weak tea. While it is true that France is less unionized than the U.S., it is also true, as Richard Yeselson observes in his conversation with Jonathan Cohn in The New Republic, that “France actually has smaller percentage of union members than the US, but union contracts cover almost the entire workforce.” Given that the critique of unions tends to center on the rigidities associated with union contracts, Fox’s example does not suit his purpose.

And as for Denmark, Finland, and Sweden, where union density hovers around 70 percent (67.6 percent in Denmark in 2010, 69 percent in Finland in 2011, and 67.7 percent in Sweden as of 2013, the latest numbers available via the OECD for each country), it is important to understand that all of them are Ghent system countries. That is, all of them are countries in which the unemployment insurance system is administered by labor unions. In “Paths to Power,” Michael Dimick describes the Ghent system’s distinctive properties, and he argues that “collectively-bargained unemployment insurance is efficient and establishes a positive-sum tradeoff between a form of labor-market security for workers and a flexible workplace for employers.” I don’t share Dimick’s enthusiasm for organized labor, but he does an excellent job of explaining the deficiencies of the U.S. approach to labor unions. One of his central points is that labor laws in Denmark and Sweden (he doesn’t specifically address Finland) don’t offer particularly strong protections to unions:

How much better are labor laws in Denmark and Sweden? Both countries have had union densities in the 70 and 80 percents in recent years, much higher than in the US either currently or historically. Given their social-democratic history and politics, one might suppose that these Nordic countries would have untrammeled union-security provisions, effective representation procedures, a strictly-enforced duty to bargain, and high levels of job security, in addition to an elaborate, overarching legal framework for regulating employment relations. In fact, on balance neither Danish nor Swedish labor law is significantly more protective of unions or workers than current labor law in the US. First, in both Denmark and Sweden, union-security agreements are virtually nonexistent. As strange as it sounds, they are essentially “right-to-work” countries.

Dimick describes how the Ghent system resolves some of the problems that U.S. labor law sets out to solve:

First, the Ghent system provides an alternative solution to the free-rider problem. Since it is voluntary and administered by unions, it gives workers an incentive to keep and maintain membership without the need for union-security agreements.

By “union-security agreements,” Dimick means the closed shop, in which union membership is a condition of employment. 

Second, the Ghent system addresses the recognition problem by separating and reprioritizing employees’ decision to join a union from the employer’s decision to recognize it. Danish and Swedish unions are able to constantly recruit new members through their administration of unemployment insurance, and thence mobilize and build membership support among employees for other labor-movement goals, including recognition from employers, without the need for government certification procedures. These two decisions are confounded and their ordering reversed in US labor law and practice: in order for unions to recruit and build membership support, they must first prevail in a government-administered representation election against a typically intransigent employer. Finally, union participation in unemployment-insurance policy also helps sustain more cooperative labor relations. In Denmark in particular, unions and employers are able to achieve a positive-sum tradeoff whereby workers receive income security in exchange for ceding their demands for job security, which gives employers more flexibility in the workplace. Danish success with the policy—termed “flexicurity”—has garnered much attention from European policy makers.

Dimick explains the upshot of these differences for productivity and competitiveness, the subject Fox briefly addresses in his post: 

Even defenders of unions usually concede that unions have negative effects on productivity or unemployment, or both. Indeed, I show that when unions and employers bargain over wages and employment-protection rules (such as “just cause”), risk-averse workers will prefer a contract with excessive job security that is production inefficient. However, when unions and employers bargain alternatively over wages and unemployment benefits, leaving to the employer the right to hire and fire, this externality is internalized and the resulting contract causes no loss in productive efficiency compared to the competitive, nonunion benchmark. Understanding these mechanisms can help explain the supportive role that Danish flexicurity plays in solving the adversarial problem. Moreover, as I shall argue, there are good reasons why unions should participate in unemployment-insurance policy in order to make the implementation of flexicurity a success. [Emphasis added]

Dimick concludes by arguing that the U.S. ought to embrace something like the Ghent system, on a state-by-state basis. I haven’t thought very deeply about the idea, but what I can say is that attributing the virtues of Ghent system unions to U.S. unions makes little sense; they are profoundly different, despite the fact that we call them the same thing.

Immigration and the Persistence of Social Status






Though the title of Gregory Clark’s new Foreign Affairs essay (“The American Dream Is an Illusion”) is regrettable — my guess is that it was written by an editor hostile to Clark’s argument — the essay itself is compelling and important. Those who are familiar with Clark’s The Son Also Rises and A Farewell to Alms will quickly grasp the premise. Drawing on a wide range of data sources, Clark has chronicled the pace of social mobility over centuries across a number of different countries, and his central finding is that “social mobility rates are extremely low” and that “seven to ten generations are required before the descendants of high and low status families achieve average status,” both in egalitarian countries like Sweden and in more laissez-faire countries like the United States.

In his new essay, Clark applies this insight to immigration policy. Specifically, he posits that the apparent success of immigrant assimilation in earlier eras largely reflects the fact that “immigrants who quickly assimilated to their new society in countries such as the United States were often positively selected from the sending populations.” The poor immigrants who made their way to the U.S. from Scandinavia and central and eastern Europe were generally literate women and men well-equipped for life in a modernizing society. Of course, not all immigrants fell into this category. Clark discusses Americans of French origin, including those descended from French settlers in Louisiana and from more recent French Canadian immigrants. While Irish and Italian Catholic immigrants faced more intense discrimination than people of French origin, he notes that their descendants have achieved more or less average social status while people of French origin have not. According to Clark, this reflects the fact that the French who arrived in the United States “were overwhelmingly drawn from the lower classes of Acadia and Quebec, as a result of demographic patterns and selective migration,” and “the effects of this lower social status have persisted across generations,” despite intermarriage.

And Clark maintains that the same pattern is recapitulating itself among more recent immigrants. Visa restrictions helped ensure that immigrants from some regions (sub-Saharan Africa, the Arab world, South Asia, and East Asia) had skills that were of value in the U.S., and this effectively limited immigration to people who were from groups with above-average social status in their native countries. Immigrants who did not face these restrictions, because they arrived as refugees or as unauthorized immigrants, “entered the United States with low social status and have struggled to achieve upward mobility since.” A similar pattern obtains in Europe, where the descendants of guest workers drawn from rural populations have found it difficult to climb the economic ladder.

One of the most persistent myths surrounding the immigration debate is that if the U.S. placed a heavier emphasis on skills in shaping its immigration policy, the share of immigrants from Latin America would plummet. (Mark Krikorian offers a version of this thesis in a recent article for NRO.) Clark illustrates why this isn’t necessarily the case. Clark observes that “migrants from Mexico and Central America tend to be negatively selected from their home populations: they are often the people who found themselves in such desperate economic circumstances at home that they preferred to live as illegal immigrants in the United States,” which helps explain why the social status of descendants of migrants from Latin America tends to be low. 

But a skills-based immigration policy would create more opportunities for skilled Latin American immigrants who have something to lose, and who would not be willing to live as unauthorized immigrants. Consider a recent Pew Global Attitudes Project poll of Mexicans, which found that 34 percent of Mexicans would move to the U.S. if given the opportunity, and half of them (17 percent) would do so without authorization. It seems reasonable to bet that the 17 percent who would not do so without authorization are drawn from Mexico’s more educated classes. Mexico’s educational attainment rate is low by the standards of affluent market democracies, yet it is increasing: while only 12 percent of Mexican 55-64 year-olds have a post-secondary education, 22 percent of 25-34 year-olds have one. There is a fairly large pool of educated Mexican immigrants to draw from, should the U.S. choose to do so. 

Clark, however, offers a different strategy: he calls for increasing the immigration of educated Latin Americans from countries like Argentina, Brazil, Chile, and Peru, as doing so would “bolster the overall social status of the Latino population in future generations, and their representation in higher-status positions in the society.” While this seems like a perfectly sound idea, it’s not clear to me that it would help the U.S. forestall the emergence of “a substantially poorer and less educated Latino underclass,” particularly if, as seems likely, the descendants of skilled immigrants are more likely to intermarry than the descendants of less-skilled immigrants, a phenomenon that reflects the larger rise of assortative mating (in which people choose partners with similar levels of educational attainment) and that contributes to ethnic attrition (in which people cease identifying with a given ethnic group, usually because they are of mixed ancestry and their connection to the group has attenuated). Indeed, a mestizo underclass might come to see itself as racially distinct from Latinos descended from middle- and upper-class social groups, a phenomenon that is arguably already taking hold. 

As for immigrants of Asian origin, it is important to note that the channels for skilled immigration Clark identifies are not the only channels that Asian immigrants use. Many Asian immigrants arrive via family-unification visas, and a large number have arrived via the diversity visa lottery. As a general rule, the relatives of skilled Asian immigrants will also tend to be skilled, but this isn’t always or necessarily the case. It is not uncommon for a capable immigrant to invite less-capable relatives to join her in her adopted country. There might in fact be considerable social pressure for her to do so.

And as David Nakamura of the Washington Post reports, the Obama administration is contemplating an executive action that could dramatically increase legal immigration, despite the fact that large majorities of Americans consistently oppose such an increase:

The proposal outside groups are pushing centers on changing the way the government counts the number of foreigners who are granted green cards, which allow them to live and work in the United States. Under the law, 226,000 green cards are reserved for family reunification and 140,000 for employment in specialized fields, numbers that Congress established in 1990.

The government has traditionally counted each family member against the limit when granting visas to foreign siblings of U.S. citizens. The spouses and children of permanent U.S. residents and foreign workers have counted against the limits as well. Advocates are calling on Obama to count only the principal green-card holder in each case, while allowing the rest of the family members in, which would reduce huge backlogs in both categories.

More than 4.4 million people are waiting for green cards, according to the State Department.

Asian American advocacy organizations have focused on such changes because, other than Mexico, the countries with the longest waiting lists of people trying to join relatives in the United States are the Philippines, India, Vietnam and China — with delays stretching as long as two decades.

This executive action would have a profound effect on the future composition of the U.S. population, and the future composition of the Asian origin population. It likely means that the Asian American population, which now has a median household income higher than that for the U.S. as a whole (partly because this population is concentrated in high-wage, high-cost regions), would grow poorer and less capable of self-support. 


How Many Public Employees Can We Afford?






In an interview with The New Republic’s Jonathan Cohn, Richard Yeselson, a veteran of the labor movement and a well-regarded policy intellectual, offers a mostly sanguine take on the role of public sector unions in American society. Members of public sector unions now outnumber members of private sector unions by a considerable margin, and there is a good reason they’ve become such a lightning rod. Suffice it to say, I don’t agree with Yeselson’s interpretation, but he does offer one observation that I’d like to unpack:

[T]he strongest critique I hear from the right (and some centrist Democrats too) about public sector unions is that their first priority needs to be the excellent provision of services, rather than the job security of public sector workers. And you know what? I agree! Who isn’t in favor of excellent provision of public services? But given how little revenue we raise and how little we spend on public services, relative to other countries, it is easy to imagine public policies that would boost public sector services and end up creating more employees, too.

First, let me stipulate that there are indeed many conservatives for whom the chief problem with public sector unions is that they favor increasing public employment levels. It is not clear to me, however, that this is in fact their main drawback, nor do I think the strongest conservative arguments against public sector unions center on their role in increasing public employment as such. 

In “Government Crowded Out,” Daniel DiSalvo of the Manhattan Institute warns that as state and local governments face rising pension and health benefit costs, they’ve been forced to cut services. And as costs per worker rise, state and local governments are less inclined to maintain high public employment levels. For example, DiSalvo observes that in New York City, a sanitation worker costs $144,000, up from $79,000 a decade ago. Had compensation costs not risen to this extent, the city might be more amenable to expanding the ranks of sanitation workers. Much has been written about the peculiarities of the defined-benefit pensions that are common in the public sector, and which many public employees would happily trade for higher wages, and so I won’t rehearse the matter here.

Public sector unions have considerable influence over the work rules governing what employees can and can’t do. To be sure, this influence has at times been overstated, as Rick Hess argues in Cage-Busting Leadership, his book on how bureaucratic inertia can be more of an obstacle to reform than work rules as such. But work rules can effectively determine staffing levels in a given job function. In doing so, they limit the ability of state and local governments to embrace new productivity-enhancing technologies and to adapt to changing consumer preferences. In “The Trouble with Public Sector Unions,” DiSalvo discusses how rigid work rules shape the culture of the public sector:

Yet as skilled as the unions may be in drawing on taxpayer dollars, many observers argue that their greater influence is felt in the quality of the government services taxpayers receive in return. In his book The Warping of Government Work, Harvard public-policy scholar John Donahue explains how public-employee unions have reduced government efficiency and responsiveness. With poor prospects in the ultra-competitive private sector, government work is increasingly desirable for those with limited skills; at the opposite end of the spectrum, the wage compression imposed by unions and civil-service rules makes government employment less attractive to those whose abilities are in high demand. Consequently, there is a “brain drain” at the top end of the government work force, as many of the country’s most talented people opt for jobs in the private sector where they can be richly rewarded for their skills (and avoid the intricate work rules, and glacial advancement through big bureaucracies, that are part and parcel of government work). Thus, as New York University professor Paul Light argues, government employment “caters more to the security-craver than the risk-taker.” And because government employs more of the former and fewer of the latter, it is less flexible, less responsive, and less innovative. It is also more expensive: Northeastern University economist Barry Bluestone has shown that, between 2000 and 2008, the price of state and local public services has increased by 41% nationally, compared with 27% for private services.

Yeselson finds it easy to imagine policies that improve the quality of public services while also increasing public employment. I’d suggest that improving the quality of public services might entail increasing public employment in some domains while decreasing it in others. It is this reallocation that public sector unions make extremely difficult, as unions, as democratic organizations, reflect the risk-aversion of their median members, who believe, correctly for the most part, that unionized public employment helps them secure more favorable terms than they’d be able to secure from private employers. It is not obvious that New York City gains much from the fact that the MTA employs 25 workers for tunnel-boring machine work that Spanish transit agencies need only 9 workers to accomplish. But I don’t doubt that the 16 workers who’d be made redundant by a shift to Spanish-style labor practices would not welcome the change. In aggregate, however, I too can easily imagine a scenario in which rolling back rigid work rules leads to stable or even increased employment levels – New York City could stand to employ more police and more sanitation workers, if they were more affordable. The trouble with public sector unions is not so much that they increase public employment levels as that they make it hard for state and local officials to meet the evolving needs of dynamic communities.

UPDATE: As if on cue, Kate Zernike of the New York Times reports on encouraging developments in Camden, New Jersey, a depressed, crime-ravaged community where a notoriously ineffective local police force was recently disbanded and replaced by a police force operating under the auspices of the county government:

Dispensing with expensive work rules, the new force hired more officers within the same budget — 411, up from about 250. It hired civilians to use crime-fighting technology it had never had the staff for. And it has tightened alliances with federal agencies to remove one of the largest drug rings from city streets.

The entire article is well worth reading. The old police force was fiercely resistant to change. The new police force is now unionized, yet the sweeping away of the old order has left fresh memories that have prevented backsliding thus far. It remains to be seen if the new police force, and its union, will grow just as hidebound as its predecessor. What we can say is that Camden’s police have grown more efficient by “dispensing with expensive work rules,” which in turn has allowed for the expansion of the workforce. 

The New CBO Report: Medicare Really Is Looking Better, But Not Good Enough






The Congressional Budget Office released its update to the Budget and Economic Outlook for the next decade on Wednesday. Damian Paletta has a summary for the Wall Street Journal, but here are three big takeaways for thinking about policy choices in the coming years:

1. Deficits are returning to normal levels, but not for long.

According to the CBO, this year’s deficit will fall to 2.9 percent of GDP – smaller than the historical average – and the deficit will shrink again in 2015, too. This reflects the natural fall in spending and increase in revenues one can expect coming out of a major recession, and considering that mandatory spending (for entitlements like Social Security and Medicare) is up 4 percent, or $79 billion, this year, the fact that the immediate budget is improving is impressive (though some of the cuts we’re seeing, such as those to the Pentagon, are controversial).

The problem, of course, doesn’t lie in 2014 or 2015, but further down the road, when Social Security and the health-insurance programs drive our deficits and debt to unsustainable levels. This CBO report shows that the climb is set to begin in the next decade, with deficits rising from 2.9 percent of GDP to 4 percent of GDP in 2024, and with 85 percent of the increase in outlays coming from Social Security, Medicare and the other health care programs, and interest payments on the debt.

2. The CBO projects the labor market to recover, but has its doubts about long-run GDP growth. Obamacare has something to do with it.

One of the most important debates within the Fed and among policy wonks concerns the trajectory of the labor market: Does the combination of demographic shifts, long-term unemployment’s scarring effects, and stagnant wages mean that for the foreseeable future we’re guaranteed lower levels of employment? Or is the low labor-force-participation rate and elevated unemployment rate the result of “slack” in the economy that could be addressed with better economic performance and further monetary stimulus?

In some ways, the CBO takes the latter view — citing a labor-force-participation rate below what demographics would project and a large number of part-time workers who would prefer full-time hours – and expects that faster economic growth will reduce the slack and push the unemployment rate down to 5.7 percent by 2016. But that’s partly because it doesn’t expect the labor-force-participation rate to recover: Increased demand for labor on one side will be outweighed by the continued aging of the population and Obamacare’s work disincentives, resulting in the rate’s dropping another half a percentage point between now and 2017.

In terms of economic growth, the CBO expects this year’s growth number, hampered by a rough first quarter, to be a paltry 1.5 percent. Activity will pick up in 2015 and 2016, returning to over 3 percent growth. But then through 2024, growth subsides to 2.2 percent, an anemic pace well below our historical average. That lower growth rate is in large part due to a smaller labor force, caused by aging and changing work incentives.

3. Medicare’s in better shape for now, but still in trouble down the road.

The CBO projects that Medicare spending will rise from 2.9 percent of GDP to 3.2 percent in 2024. To put this in perspective, the following graph from the Upshot shows revisions the CBO has made in its estimates for projected growth in Medicare spending: They’ve been dropping noticeably and consistently. Medicare is in better fiscal shape than we used to think, at least over the next ten years. (Click through to the piece to see the graphic animated.)

This is unquestionably good news, but it’s important to remember two things: First, to those cheerleading the projections as evidence of the success of Obamacare in bending the cost curve, the data would seem to say that the spending reductions are driven mainly by global trends toward slower health-cost growth – likely a result of the economic slowdown – and, as the NYT’s Margot Sanger-Katz and Kevin Quealy argue, naturally occurring “technical changes” in the health-care profession, e.g., more generic drugs.

Second, the economy will pick back up, spurring more health spending, and America will continue to age, putting more pressure on the program. As Jim Capretta notes, the long-term picture is still bleak: “The program has $28.1 trillion in unfunded liabilities over the next 75 years. Together with Social Security’s $13.3 trillion shortfall, it is clear the federal government has accumulated entitlement spending commitments that far exceed our capacity to pay for them.”

Capretta maintains that the fee-for-service nature of Medicare causes inefficiency and excess spending and that its price controls distort the market while not doing anything to limit the use of services. 

Today’s Policy Agenda: Should Tax Dollars Go to Training Doctors?






Should tax dollars go to training doctors?

At the Upshot, prominent health-care academic Uwe E. Reinhardt argues that, as an economic matter, new doctors shouldn’t be trained at the expense of taxpayers. The rationale for publicly funded medical degrees is that medical training, or rather individuals with medical training, is a public good — an economic concept referring to something everyone can use without impinging on others’ ability to use it.

But as Reinhardt points out, since medical professionals decide if, when, and where they work, they aren’t equally accessible to everyone:

Medical education and training represents human capital that is fully owned by the trainees. They can deploy it as they wish — on patient care, or even in the financial markets, where quite a few physicians now work. In principle, therefore, the owners of that valuable, purely private human capital should pay themselves for its production.

But according to the Association of American Medical Colleges, the U.S. is already facing a serious doctor shortage that will only get worse:


Even taking the industry’s claims with a few grains of salt, the constrained supply of doctors does suggest that their training needs some form of public support. 

Millions will see their Obamacare subsidies reduced automatically. 

The Associated Press reports that millions of Americans who received subsidies to buy health insurance will receive a smaller amount than they planned on. Since subsidies are linked to income, if individuals make more money in 2014 than they anticipated, their subsidy will be automatically reduced. Subsidies are given as tax credits, so some individuals’ tax refunds will be smaller than they planned on, while others may end up with an actual tax liability. “More than a third of tax credit recipients will owe some money back, and (that) can lead to some pretty hefty repayment liabilities,” one tax expert told the AP.

Most affected individuals don’t know they will be receiving less at tax return time. There is a process to report unexpected income and to avoid owing extra at the end of the year, but few consumers have used it because it’s complicated and requires month-to-month income accounting. 

American corporations actually do pay their fair share.

In Forbes, Yevgeniy Feyman argues that, contrary to what President Obama has implied, inversion, in which a company moves its headquarters to another country because of differences in tax systems, is not unpatriotic – it just makes sense:

“Restructuring to reduce tax burdens is no more unpatriotic than corporations incorporating in Delaware to take advantage of the simple incorporation process. More importantly, the idea that companies aren’t ‘paying their fair share’ – at least relative to companies in other countries – is equally disingenuous,” he writes.

The U.S.’s high corporate tax rate, combined with its attempt, unlike most other countries, to tax the global income of U.S.-based firms, means that more and more companies are making the move abroad.

Feyman concludes that U.S. companies’ efforts to relocate, a process that is both expensive and difficult, are an indication that something is very wrong with the American taxation system, not with the firms themselves.

Today’s Policy Agenda, August 26: Study Says Globalization Is Harming U.S. Employment






Work itself is crucial for happiness.

For Arthur Brooks’ new site The Pursuit of Happiness, Andy Quinn examines the literature on unemployment and happiness:

Cristobal Young, a Stanford sociologist, has studied the non-pecuniary effects — that is, effects that aren’t purely financial — that unemployment insurance has on the lives of recipients. Specifically, Young tracked the self-reported happiness (“subjective well-being”) of different groups of people caught up in different economic circumstances. What he found is seriously surprising.

In this graph, Young has calculated the happiness impacts of losing your job and receiving no unemployment insurance (on the left) and losing your job but receiving the benefit (on the right):

The similarity is remarkable. To hear progressive commentators tell it, job loss in the absence of unemployment insurance is like stepping straight into Hell, and the benefits do a tremendous amount to lift up hard-luck Americans. But here, we see a different story: Unemployment benefits merely take a little bit of the edge off the happiness downdraft from being laid off. To be sure, the financial help cuts back on some stress at the margins. But just as clearly, involuntary idleness brings a massive psychological cost that mere money can hardly touch.

The policy implications of this study stretch far beyond the recent debate over extending emergency unemployment insurance. After all, even if one is convinced by this study of the overwhelming importance of employment, it’s possible that by keeping recipients actively searching for jobs, the emergency UI benefits were helping, not hindering, the return to work.

The broader lesson from the study is that orienting social policy towards employment – through work requirements, wage subsidies, and the litany of proposals circulating on the Right in recent months – is far from cold-hearted. In fact, given the well-documented benefits of employment, ranging from staying healthy to building strong relationships, a push towards work, if protected by a safety net, is in the best interests of Americans.

Study: Globalization does slow American employment growth.

Several economists, including David Autor, have a new NBER working paper out investigating the relationship between imports from China and employment:

Even before the Great Recession, U.S. employment growth was unimpressive. Between 2000 and 2007, the economy gave back the considerable gains in employment rates it had achieved during the 1990s, with major contractions in manufacturing employment being a prime contributor to the slump. The U.S. employment “sag” of the 2000s is widely recognized but poorly understood. . . . We find that the increase in U.S. imports from China, which accelerated after 2000, was a major force behind recent reductions in U.S. manufacturing employment and that, through input-output linkages and other general equilibrium effects, it appears to have significantly suppressed overall U.S. job growth.

While globalization and capitalism have radically reduced global inequality and made the world much better off as a whole, not everyone in the United States benefits uniformly from such transactions. This study suggests that workers in the tradable sector – jobs that can be moved around the world – face job losses and likely wage cuts or stagnation in the new economy. This is not to say that we need to restrict trade or engage in protectionism to insulate these workers from new economic realities — too many lives are being radically improved across the globe. It’s just important to remember that, absent any intervention, a sizable portion of the American population (particularly in, say, North Carolina, Michigan, and Alabama) is going to struggle in this new reality, and policymakers ought to bear that in mind.

For the Storyline, Howard Schneider explains this trend from a personal point of view.

Is immigration why Scott Brown now has a race on his hands?

With a new poll from the University of New Hampshire showing Scott Brown pulling into a statistical tie after trailing significantly for most of the race, could it be that the elevation of immigration in the national conversation has helped him?

From TV ads to op-eds, Brown has driven home his opposition to “comprehensive immigration reform.” We’ve talked recently about how elite consensus on that issue isn’t really in line with public opinion, and the fact that Brown is getting traction in a northeastern swing state by taking a strong stance against it is certainly interesting.

In a recent issue of National Review, Reihan and Yuval Levin outlined an alternative to the big-business-driven proposals that have been circulating of late — they argue their alternative is superior on both the political and the policy merits.

The Politics of Respectability and the Future of the Democratic Coalition






Successful political parties are successful for only so long. As a coalition grows more expansive and diverse, it also grows more fractious. This raises the risk that some important segment of the coalition might defect and, in a political system dominated by two major parties, join the opposing team. The rise of Barack Obama was supposed to have cemented the Democratic Party’s majority status, yet as Sean Trende, author of The Lost Majority, and others have argued, dominant parties have never been as dominant as advertised in modern U.S. political history, and today’s Democrats and Republicans both suffer from vulnerabilities that make it unlikely that one will marginalize the other for any meaningful length of time.

But given recent Democratic successes in presidential elections — recall that George W. Bush won the presidency fairly narrowly in 2000 and 2004, and that he lost the popular vote in 2000 — the notion that Democrats really have struck on a successful political formula enjoys wide acceptance. The coalition of African Americans, Latinos, Asian Americans, and white college-educated liberal professionals that Ruy Teixeira and John Judis identified as the heart of the “emerging Democratic majority” does indeed represent a growing share of the electorate, a fact that has caused no small anxiety for Republican political strategists, many of whom have concluded that embracing comprehensive immigration reform or social liberalism is essential to future GOP success.

In June, Ross Douthat of the New York Times reminded his readers that the Democratic coalition is more vulnerable than it appears, briefly observing that “the liberal coalition’s extraordinary diversity also offers many potential lines of fracture.”

One of these lines of fracture has been growing more pronounced in recent weeks. In the ongoing conversation over the shooting death of 18-year-old Michael Brown at the hands of a local police officer in Ferguson, Missouri, there has been much talk of “respectability politics.” Some number of prominent African Americans have used Brown’s death, and the attention it has drawn to the use of force by police against young black men, as an opportunity to discuss some of the social maladies that are particularly prevalent in black communities, and which disproportionately impact the life prospects of young black men.

Byron York, columnist for the Washington Examiner, has just written a dispatch from Michael Brown’s funeral, where the erstwhile presidential candidate, activist, and television news personality Al Sharpton delivered the eulogy. In classic form, Sharpton started off his eulogy by condemning “the police, the government, and the American system, concluding that they all combined to end a promising 18-year-old life.” Yet Sharpton then addressed a different set of concerns:

After a demand for broad reforms in American policing, Sharpton changed course to address his black listeners directly. “We’ve got to be straight up in our community, too,” he said. “We have to be outraged at a 9-year-old girl killed in Chicago. We have got to be outraged by our disrespect for each other, our disregard for each other, our killing and shooting and running around gun-toting each other, so that they’re justified in trying to come at us because some of us act like the definition of blackness is how low you can go.”

“Blackness has never been about being a gangster or a thug,” Sharpton continued. “Blackness was, no matter how low we was pushed down, we rose up anyhow.”

Sharpton went on to describe blacks working to overcome discrimination, to build black colleges, to establish black churches, to succeed in life.

“We never surrendered,” Sharpton said. “We never gave up. And now we get to the 21st century, we get to where we’ve got some positions of power. And you decide it ain’t black no more to be successful. Now, you want to be a n—– and call your woman a ‘ho.’ You’ve lost where you’re coming from.”

The cameras cut to director Spike Lee, on his feet applauding enthusiastically. So were Martin Luther King III, radio host Tom Joyner, and, judging by video coverage, pretty much everyone else in the church. They kept applauding when Sharpton accused some blacks of having “ghetto pity parties.” And they applauded more when Sharpton finally declared: “We’ve got to clean up our community so we can clean up the United States of America!”

Not every observer was pleased by Sharpton’s address, of course. Some were appalled by the implication that Brown’s funeral should prompt a discussion of black personal responsibility, as York reports. Elsewhere, Julia Ioffe of The New Republic discusses the debate over the politics of respectability among African Americans:

It was a sentiment I heard again and again in Ferguson: Yes, the largely white police force acted egregiously. Yes, the system—in segregated St. Louis more than in most cities—is stacked against them. But there’s something rotten inside the black community, too. “I feel like the race needs to get the infection out of itself,” Dellena, the owner of the 911 Hair Salon, a block away from the burned-out QT, told me. “People are not educated. You need to think, what is the image that you’re giving off? You need to have all your business together if you know you’re ten times more likely to get pulled over.” Or as Mark L. Rose, a late-middle-age black man I met at a protest, put it, “When the cops see these boys walking around with their pants down, of course they have no respect for them.”

This self-criticism—or self-flagellation—is nothing new. It’s the return of a phenomenon that is referred to by African-American historians as the “politics of respectability.” “During times of unrest, black writers going back to the early 20th century have argued that the reason blacks are facing discrimination or police brutality is because they have not been acting properly in public—particularly young, poor people,” says Michael Dawson, a political scientist and director of the Center for the Study of Race, Politics, and Culture at the University of Chicago. “In the last 20 years, it’s been a criticism of baggy pants, rap music, hair styles. Back in my generation, it was Afros. I remember my grandparents telling me, ‘you should cut your hair.’”

Respectability, in essence, is about policing the behavior in your community to make sure people are behaving “properly,” so as to not attract unwelcome attention from whites—“with ‘properly’ being a normatively white middle class presentation,” says Dawson. In feminist discourse, a similar phenomenon among women is described as internalizing the patriarchal gaze. That is, women see themselves as the men in charge want to see them—feminine, sexy, pliant—and then behave and dress accordingly. Respectability is the same thing, but with blacks internalizing the white gaze.

Suffice it to say, Ioffe disapproves of this “self-flagellation.” And I don’t doubt that many younger liberals, including many younger African-American liberals, feel as she does. One wonders if Al Sharpton has lost the plot in his old age, and if other voices, who forcefully reject the politics of respectability, will soon come to the fore.

Josh Barro, writing for The Upshot, raises the intriguing possibility that at some point, a Democratic political entrepreneur will run a national campaign that “give[s] voice to the anger we’re seeing in Ferguson.” Though Barro acknowledges the political logic of downplaying sweeping critiques of the racism of the criminal justice system at the national level, as the African American electorate is monolithically Democratic while non-black voters who are skeptical of these critiques are not, he suggests that this neglect might soon come to an end:

[I]f the Tea Party has taught us anything, it’s that a base can force its party to take stances that won’t be popular in a general election. Black voters, and other Democratic voters who care about issues of policing and racial justice, don’t have to flex their political muscle by being willing to leave the party. If these issues are of importance to much of the electorate — and this month’s protests suggest they are — then a politician should be able to build a credible Democratic primary campaign by focusing on them.

I suspect that Barro is right, and that we will see a Democratic presidential campaign in the 2016 or 2020 primaries that offers a racially-infused critique of the American criminal justice system, which will look quite different from calls for criminal justice reform from social conservatives and libertarians.

Note, however, that not all African Americans will welcome this critique. Indeed, there may well be overlap between those who embrace the politics of respectability and those who are wary of an overtly racialized conversation about criminal justice reform. The now-famous Pew survey which found “stark racial divisions” in reaction to Michael Brown’s death reveals, yes, that blacks and whites have reacted differently. It also reveals that 18 percent of blacks agree with 47 percent of whites that “race is getting more attention than it deserves” while 80 percent of blacks agree with 37 percent of whites that “this case raises important issues about race.”

It is important not to extrapolate wildly from the existence of this contrarian slice of the African-American population. But one wonders if these voters might at some point be open to voting for a Republican Party that talks about the criminal justice system more sensitively and intelligently without fully embracing a racialized critique and, most importantly, that places a much heavier emphasis on middle-class economic interests.

The Principle of Infrangibility and the White-on-White Murder Rate






Back in 1999, the Harvard sociologist Orlando Patterson made the case for what he called “the principle of infrangibility”:

Some problems, of course, are characteristic of certain groups, the result of their peculiar history, socioeconomic environment and cultural adaptation to life in this country. This is as true of urban Afro-Americans as it is of rural Anglo-Americans in Appalachia or Asians. Thus we might ask why mass murders seem exclusively the doing of young white men who often come from the middle class.

What is at issue here is the principle of infrangibility: our conception of normalcy and of what groups constitute our social body — those from whom we cannot be separated without losing our identity, so that their achievements become our own and their pathologies our failures.

We should speak not simply of black poverty but of the nation’s poverty; not the Italian-American Mafia problem but the nation’s organized crime problem; not the pathologies of privileged white teen-age boys but … of all our unloved, alienated young men.

When we compare, say, the white murder rate to the black murder rate, rather than comparing the black murder rate to the U.S. murder rate (a total that already factors in the black population), we risk creating the impression that we are dealing with two separable populations.

Yet there are times when it can be useful to set aside the principle of infrangibility, not because we are in fact dealing with separable populations, but rather to demonstrate the vulnerability of a particular population.

This brings to mind Matt Yglesias’s recent discussion of the white-on-white murder rate in the U.S., an effort to shed light on what some are calling the “fallacy” of talking about black-on-black crime. Yglesias warns that “white-on-white murder in America is out of control,” and to make his point he compares it to white-on-white homicide rates in a number of other countries:

This is not to say that white people are inherently prone to violence. Most whites, obviously, manage to get through life without murdering anyone. And there are many countries full of white people — Norway, Iceland, France, Denmark, New Zealand, and the United Kingdom — where white people murder each other at a much lower rate than you see here in the United States. On the other hand, although people often see criminal behavior as a symptom of poverty, the quantity of murder committed by white people specifically in the United States casts some doubt on this. Per capita GDP is considerably higher here than in France — and the white population in America is considerably richer than the national average — and yet we have more white murderers.

While one can debate what it means for a country to be “full of white people,” it is worth noting that the white share of New Zealand’s population (74 percent) is lower than that of the United States (77.7 percent), and the non-white populations of France and Britain are quite high. Moreover, non-white individuals in these countries are, like non-white individuals in the U.S., more likely to be killed than whites. It is not clear to me how Yglesias calculated the white-on-white murder rate in these societies, but I’m happy to accept that all of them have a lower white-on-white murder rate than the United States.

But if we instead compare the rate of intentional homicides of these countries to the rate for the white population of the U.S., the white U.S. does not in fact look like a dramatic outlier. (I want to stress that I could be getting something wrong here, so please let me know if I’ve gone astray and I will revise accordingly.)

According to statistics gathered by the UN Office on Drugs and Crime, the 2011 intentional homicide rates per 100,000 for the countries identified by Matt are as follows: Norway (2), Iceland (1), France (1), Denmark (1), New Zealand (1), and the UK (1). The rate for the U.S. as a whole is 5. As of 2011, there were 3,172 white murder victims in the U.S., according to the FBI. The white population as a whole is 245.5 million, including whites who identify as Latinos. This yields an intentional homicide rate of 1.29, a number almost indistinguishable from those of Iceland, France, Denmark, New Zealand, and the UK and lower than the intentional homicide rates of Norway, Canada (2), Belgium (2), Israel (2), and Finland (2). In contrast, there were 2,695 black murder victims in 2011 against a 2013 black population of 41.7 million, which yields an intentional homicide rate of approximately 6.5, a rate higher than that of Kenya (6) but lower than that of Lithuania (7).

[Well, I'm glad that I stressed that I could be getting something wrong here. I used Vox's link to the FBI's single victim/single offender murder statistics to make these calculations, and I was wrong to have done so for the purposes of constructing a synthetic intentional homicide rate for U.S. whites and blacks. A more complete picture of murders finds that there were 5,825 white murder victims in the U.S. and 6,329 black murder victims. The white U.S. population thus had an intentional homicide rate of 2.37. This is considerably higher than the 2011 Canadian murder rate of 1.73, which the UN source I cited earlier rounds up to 2. The black U.S. population, meanwhile, has a far higher intentional homicide rate when we don't limit ourselves to single victim/single offender murders: it is 15.18. This is shockingly high by the standards of the affluent market democracies -- it lies between Ecuador (15) and Guyana (16).]
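For readers who want to check the arithmetic, here is a minimal sketch that reproduces the per-100,000 rates above; the victim counts and population totals are the ones cited in this post, and nothing else is assumed:

    # Per-100,000 homicide-victimization rates from the figures cited above.
    white_pop = 245_500_000   # white population, including whites who identify as Latino
    black_pop = 41_700_000    # black population (2013 figure cited above)

    def rate_per_100k(victims, population):
        """Victims per 100,000 people."""
        return victims / population * 100_000

    # Single victim / single offender counts (the narrower FBI table used at first)
    print(rate_per_100k(3_172, white_pop))   # ~1.29
    print(rate_per_100k(2_695, black_pop))   # ~6.5

    # Fuller victim counts (the corrected figures in the bracketed note)
    print(rate_per_100k(5_825, white_pop))   # ~2.37
    print(rate_per_100k(6_329, black_pop))   # ~15.18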

Yglesias expresses deep concern in his post about white violence, and I don’t begrudge him that, as his concern is obviously sincere. (It is worth noting that the number of white offenders is substantially lower than the number of white murder victims, but we’ll leave that aside for the moment.) When viewed in comparative context, however, it is not obvious that white Americans are unusually violence-prone. What I find remarkable is that despite the widespread availability of firearms in the U.S., and despite a culture that is in many respects more violent than those of our European counterparts, the white U.S. population appears to have had, in 2011 at least, a murder rate comparable to that of Norway and Canada. Yet it would be senseless to take comfort in this fact, for the reason that Orlando Patterson identifies in his column: we shouldn’t focus on the white homicide rate so much as we should focus on the national homicide rate, which is alarmingly high, and not just by the standards of affluent market democracies. And the national homicide rate is extremely high at least in part because we have failed to police predominantly black neighborhoods effectively.

The Evidence Behind Common Core Is Really Weak






The Common Core education standards are a massive effort intended to raise educational standards across the country. Untold hours and dollars have already been spent on their implementation, which is still proceeding in more than 40 states even as a few have dropped out. But what is the evidence that the new standards will improve learning?

As I noted a year ago, simple correlations of test scores with standards across states or nations are not definitive, given all of the intervening variables involved in those comparisons.

Now the Center on Education Policy at George Washington University has put together a compendium summarizing over 60 research papers related to Common Core design and implementation. If there is empirical evidence on the importance of strong standards, this is probably the place to find it. Unfortunately, only two papers in the entire compendium are devoted to measuring the impact of Common Core on test scores. Both papers employ the dubious correlation-across-states methodology, and both give mixed results at best.

The first paper, by two Michigan State professors, examines the relationship between states’ math scores in 2009 and the similarity of their math standards (pre–Common Core) to the Common Core math standards. The authors initially find no correlation in the 50-state universe. They are able to detect a positive relationship only with an ex post division of states into two separate groups, with the smaller group consisting of 13 states with low scores despite strong standards. The authors acknowledge that “these analyses should be viewed only as exploratory in nature, merely suggesting the possibility of a relationship.”

The second paper, published by Brookings, follows up on the Michigan State analysis. It finds that states’ test score gains between 2009 and 2012 show no relationship to the similarity of their standards to Common Core. There was no positive correlation even when using the favorable groupings from the Michigan State paper. The one encouraging finding in the Brookings paper is that states with stronger implementation of Common Core seem to show greater gains. But the author warns that, even if the correlation is genuine, the effect size is tiny.

And that’s it.

Much like the push for government preschool, the Common Core movement is suffused with much hope but little evidence. That’s clear from how the standards were developed in the first place. As an important article from last November’s American Journal of Education points out, most of the research evidence behind Common Core focuses on identifying problems — America’s poor international ranking, achievement gaps, high school graduates without basic skills, etc. But when it came to writing standards to address those problems, the Common Core developers had little to go on except the standards of high-performing nations and the “professional judgment” of various stakeholders.

So although the rise of national standards is one of the most significant education policy changes in a generation, and despite the passion of proponents, the data can tell us very little about Common Core’s future impact.

Of course, this isn’t usually the rationale articulated against Common Core — parents’ groups and anti-ed-reform groups have put forth more specific criticisms of the standards and the related testing regimes. But Common Core definitely is ailing: A new poll commissioned by Education Next finds that support for the standards has been slipping nationwide.

No, One Program Did Not Reduce Colorado’s Teen Pregnancy Rate by 40 Percent






This month, a study was released finding that a Colorado state-government program providing free contraception of all kinds to low-income women had reduced teen pregnancies and abortions in the state by an incredible amount. The Washington Post reported that the program was “how Colorado’s teen birthrate dropped 40% in four years.” It turns out, though, that while those indicators are improving dramatically in Colorado, it’s hard to credit the program in question, and a lot of the liberal praise for the program is way overblown.

In 2009, the state began a program called the Colorado Family Planning Initiative (CFPI) that gave low-income women free or low-cost IUDs and subdermal contraceptive implants, both highly effective but relatively expensive long-acting reversible contraceptives (LARCs).

The study on the program, published by the Guttmacher Institute, an influential think tank that studies abortion and reproductive health, reported that between 2008 and 2011, the birth rate for low-income teens in the state dropped by 29 percent, and the teen abortion rate dropped by 34 percent. A separate CDC report noted that Colorado’s teen birth rate has decreased by 39 percent over the past four years, while the state government found a 40 percent drop in teen births from 2009 to 2013. “The state attributes three-quarters of the overall decline in the Colorado teen birthrate to the program and said its success had a ripple effect,” the Washington Post reported.

Guttmacher summarized its findings as follows:

The Colorado Family Planning Initiative produced a radical game change in the state: The [long-acting contraceptive] methods it promoted and paid for appeared to contribute to a large decline in fertility among the young, low-income patient population and to a decline in the overall fertility rate among women younger than 25. At the same time, measurable declines occurred in abortion rates, births to young unmarried women with limited education and numbers of infants receiving WIC services. 

While that’s carefully worded, this all overstates the program’s success and influence and ignores the fact that much of this improvement probably would have happened anyway.

There were big decreases in both teen abortions and births in the Colorado counties benefiting from the program during its duration — but to say the program directly caused the huge decreases is a simplification that glosses over the complicated relationship between contraception, abortions, and births.

Why? The teen abortion rate had been falling dramatically for a significant period of time, and with CFPI, it just kept falling. The study compared 2008 and 2011 abortion rates in counties where CFPI was available — in that time period, abortion rates for 15–19 year olds in those counties decreased from 10.9 per 1,000 women to 7.2 per 1,000, which is indeed a 34 percent decrease.  

In that same time frame, abortion rates in counties without the program decreased from 14.4 to 10.2 per 1,000, a 29 percent decrease.

So how could one attribute the 34 percent decline in abortion rates to the CFPI? Almost the same reduction — about 85 percent of the reduction we saw in CFPI counties — still happened in places where the program wasn’t available.
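A quick sketch of that comparison, using only the county-level rates quoted above (this is my own arithmetic, not a calculation taken from the study):

    # Teen (15-19) abortion rates per 1,000 women, 2008 vs. 2011, as quoted above.
    cfpi_2008, cfpi_2011 = 10.9, 7.2            # counties with the CFPI
    non_cfpi_2008, non_cfpi_2011 = 14.4, 10.2   # counties without it

    def pct_decline(before, after):
        return (before - after) / before * 100

    cfpi_drop = pct_decline(cfpi_2008, cfpi_2011)              # ~34 percent
    non_cfpi_drop = pct_decline(non_cfpi_2008, non_cfpi_2011)  # ~29 percent

    # Share of the CFPI-county decline that also showed up where the program
    # was unavailable: roughly 0.86, i.e. the "about 85 percent" cited above.
    print(non_cfpi_drop / cfpi_drop)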

This makes sense because abortion rates have been dropping steadily for years (including among younger women):

It’s more curious that the abortion rate for women 20–24 rose slightly in the non-CFPI counties while that statistic dropped noticeably where the program was available. But with the very limited evidence the study presents, we have no idea if this is just due to random variation.

This study does not examine comparable women who happened to get LARCs through the program and those who didn’t, which would make for a rigorous study (of course, such things are often infeasible). Instead, it compares whole counties, where only a small subset of women would have had access to the program, and then attributes the changes in their overall abortion statistics to the program.

Moreover, the CFPI and non-CFPI counties aren’t remotely comparable: CFPI was in place in 37 of Colorado’s 64 counties, and those 37 counties contained 95 percent of the state’s population. The non-CFPI counties are quite rural, covering 37 percent of the state’s land mass but containing only 5 percent of the total population. This, in the chart above, is what stands in for a control group, essentially — it shouldn’t be considered any such thing.

Colorado’s teen abortion rate did drop more between 2008 and 2010 than the national teen abortion rate (unfortunately 2011 data isn’t available yet), but just barely — the rest of the country saw a big drop too. According to Guttmacher data, the national teen abortion rate dropped by 17.4 percent while Colorado saw a 25 percent decrease from 2008 to 2010.

The way the program was credited for a drop in teen birth rates is a little bit more complicated, but basically just as bad.

To arrive at the conclusion that the program reduced the teen birth rate for low-income teens by 29 percent, the Guttmacher authors projected rates of low-income teen pregnancy based on previous years, essentially drawing a straight line out into the future (“linear trend lines,” in statistics speak), and then counted any performance below that line as resulting from CFPI:

But these “linear trend lines” based on the three previous years of data aren’t really useful. The authors’ projection shows that births would actually increase a bit during the period CFPI was put in place, despite the fact that, like the abortion rate, the teen birth rate is declining nationally, noticeably and steadily:

So sure, the low-income teen birth rate did decrease relative to previous years, but without a control group, it’s impossible to know what percent of that decrease the contraception program is responsible for.

And it’s clear that Colorado’s program can’t be responsible for most of the state’s drop in birth rates when you compare the size of the reported effects to CFPI’s scope. Overall, only 8,435 low-income women received a LARC during the duration of the program. Some back-of-the-envelope calculations, given the fertility rates the authors reported for low-income teens, show that a maximum of 700 or 800 out of 11,000 or so predicted births in a given year were likely to be prevented by the CFPI, which would translate into at most a 6.8 percent reduction of births in CFPI counties. The birth rate in the counties with the program, remember, dropped by 29 percent — four times as much as the free LARCs could have accounted for.
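Here is that back-of-the-envelope comparison laid out explicitly. The roughly 11,000 predicted births and the 700-800 averted births are the figures given above (I use the midpoint), so the only thing the sketch adds is the division:

    # Rough upper bound on how much of the decline the free LARCs could explain.
    predicted_births = 11_000   # approx. predicted annual births in CFPI counties (per the post)
    averted = 750               # midpoint of the 700-800 births plausibly prevented by CFPI LARCs

    max_reduction_pct = averted / predicted_births * 100   # ~6.8 percent
    observed_reduction_pct = 29                            # drop actually observed in CFPI counties

    print(max_reduction_pct)                            # ~6.8
    print(observed_reduction_pct / max_reduction_pct)   # ~4.3, about four times the plausible maximum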

In fact, Colorado’s overall teen birth rate dropped by 40 percent from 2009 to 2013, 11 percentage points more than the low-income teen birth rate did from 2009 to 2011, suggesting there are other factors at work here.

There is likely some combination of factors driving the low teen birth rate in Colorado. The state also ramped up training and technical assistance for family-planning clinics and helped them to expand outreach, in addition to the new free provision of LARCs (which was paid for with a $23 million anonymous private donation). It’s also possible that some of the factors decreasing birth rates nationally have had a larger impact in Colorado, though there’s really no way to tell. As Sarah Kliff discusses in Vox, the cause of the national decrease in the teen birth rate is still a mystery, with theories ranging from increased IUD uptake overall to the popularity of the MTV show 16 and Pregnant.

Of course, researchers have limitations: The authors tried to measure the impact of a pre-planned public-health initiative after the fact, not conduct a gold-standard statewide experiment. In that sense, they did a pretty good job, but the study does not justify the headlines it got. The CFPI likely had some positive effects on abortion and birth rates, but it’s far from the policy panacea the headlines depicted. For a study titled “Game Change in Colorado,” it provided little evidence that the CFPI changed the teen-pregnancy-prevention game at all.

There also remain plenty of unanswered questions about CFPI as health policy: This study doesn’t cover data on discontinuance rates, reinsertion rates, changes in STI transmission, or many other important factors. For instance, it’s possible that since LARCs are effective for a number of years, birth rates could increase again in a few years when the devices expire, especially if women forget to replace them or delay replacement due to cost. Another concern is that LARCs could increase STI transmission because they replace the need for traditional barrier methods to prevent pregnancy.

So when weighing the impact of the results, we should be careful to take them for what they actually are: an indication that programs like the CFPI increase the uptake of LARCs, in the short term, to a limited degree, which might have positive effects on birth and abortion rates for some people, in some populations, in some places.

Cities, Suburbs, and Families with Children: Preliminary Thoughts






Recently, Lydia DePillis of the Washington Post contrasted two strategies for U.S. cities looking to grow their populations, drawing on a 2001 report from the Brookings Institution focused on the future of the District of Columbia:

The report set out two paths. The city could cater to the young adult and empty nester demographics, by zoning for large apartment buildings in the downtown core and fostering buzzy entertainment districts. Or it could attempt to retain middle-income families, by investing in schools and incentivizing larger housing units. The former strategy would be a fast way to bring in more people and rapidly expand the tax base. The latter could take a while — and potentially put the city back in fiscal peril.

The reason the latter path could “potentially put the city back in fiscal peril” relates to the concept of the “demographic dividend,” which David E. Bloom, David Canning, and Jaypee Sevilla explore at length in a 2003 RAND Corporation report:

Because people’s economic behavior and needs vary at different stages of life, changes in a country’s age structure can have significant effects on its economic performance. Nations with a high proportion of children are likely to devote a high proportion of resources to their care, which tends to depress the pace of economic growth. By contrast, if most of a nation’s population falls within the working ages, the added productivity of this group can produce a “demographic dividend” of economic growth, assuming that policies to take advantage of this are in place. In fact, the combined effect of this large working-age population and health, family, labor, financial, and human capital policies can effect virtuous cycles of wealth creation. And if a large proportion of a nation’s population consists of the elderly, the effects can be similar to those of a very young population. A large share of resources is needed by a relatively less productive segment of the population, which likewise can inhibit economic growth.

You see where I’m going with this: something similar obtains for cities within a given country, and this raises thorny issues for a country like ours in which we have a relatively high degree of fiscal decentralization and free migration. Some cities have large working-age populations as a share of their total populations, and some really fortunate cities have large college-educated working-age populations as a share of their total populations, which makes it easier to finance infrastructure and social services. (Whether these resources will be deployed effectively is a separate matter. Many if not most “superstar cities” are attractive to productive workers not because of the quality of local public services but because of fixed amenities and economic agglomerations that are extremely sticky, thus allowing local public sector workers to extract rents from taxpayers, which helps explain why a city like Los Angeles is governed so poorly.)

One of the issues DePillis raises is that the rising housing prices associated with gentrification in urban cores tend to encourage outmigration from cities to suburbs. She identifies a problem of affluence, which is that when high-income families living in a city deem the local public schools acceptable, homes quickly appreciate in value. Low-income families find it difficult to afford homes in the catchment areas of the most well-regarded urban public schools, and so they will often leave the gentrifying urban core for low-cost housing options in the suburbs. DePillis concludes on the following uninspiring note:

The city’s best chance to keep its population in balance over the long term — bringing in and keeping the wealthy while allowing the poor to stick around — is to build as densely as possible in areas the childless enjoy, which frees up roomier row houses that families prefer. Those big condo buildings can also be constructed to allow for units to be combined, if parents-to-be want a second bedroom and are willing to sacrifice the backyard.

And then, all that tax revenue generated by childless millennials will be enough to keep up with demand for the services that low-income families need to hang on.

While I agree with the strategy DePillis identifies for cities, it’s worth thinking through the dilemmas facing suburban communities, the subject of my next column. For now, I’ll raise one minor issue, which is that low-density communities suffer from a financial productivity problem. Financial productivity, which Charles Marohn defines as the total value per acre, is much higher in dense, multi-use urban environments than in sprawling, single-use environments. When you have densely-packed retail establishments and multi-family housing along a road, you can rest assured both that the road will be used and that the revenues generated by the buildings on either side of it will be more than sufficient to finance its upkeep. When you instead have single-purpose neighborhoods dominated by single-family dwellings, financial productivity tends to fall. There are, of course, affluent suburban communities where densities are low yet where local tax revenues can meet the challenge of financing (limited) local infrastructure. As a general rule, however, these are towns which present high barriers to low- and middle-income households. Combining inclusiveness and financial productivity seems to require density. More on this to come. 
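To make the value-per-acre point concrete before moving on, here is a toy comparison; the acreages and assessed values below are hypothetical illustrations of the concept, not Marohn’s figures:

    # Toy illustration of "financial productivity" as taxable value per acre.
    # All numbers are hypothetical.
    def value_per_acre(assessed_value, acres):
        return assessed_value / acres

    mixed_use_block = value_per_acre(assessed_value=12_000_000, acres=2)      # dense retail plus apartments
    single_family_block = value_per_acre(assessed_value=3_000_000, acres=10)  # detached homes only

    print(mixed_use_block)      # $6,000,000 of taxable value per acre
    print(single_family_block)  # $300,000 of taxable value per acre

    # The road fronting each block costs roughly the same per mile to maintain,
    # so the denser block supports far more tax base per foot of infrastructure.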

The Wrong Kind of Social Security Reform






Last week, Andrew Biggs of the American Enterprise Institute discussed the parlous state of Social Security’s finances (“CBO’s best guess is that the Social Security shortfall is roughly four times larger today than it was just six years ago”), recent calls from left-of-center Democrats for expanding Social Security, and how these proposals are likely to exacerbate rather than improve Social Security’s fiscal health. Biggs acknowledges that President Obama has proposed reducing cost-of-living adjustments to improve Social Security’s finances before noting that he withdrew the proposal after encountering intense criticism from Democratic lawmakers, several of whom have instead backed an expansion of benefits. A blog post at New York magazine takes Biggs to task for not highlighting the role of Republicans in stymieing Social Security reform:

Obama suggested nothing to help Social Security apart from that one time he did exactly that, when he proposed reducing Social Security’s cost-of-living index. (Even this absurd formulation is wrong: Biggs is ignoring the fact that Obama also proposed similar measures behind closed doors in 2011 and 2012, and was rebuked by Republicans every time.)

It is worth noting that the proposal in question does more than reduce Social Security’s cost-of-living index, as Biggs has explained in great detail. The president’s call for adopting chained CPI to calculate Social Security’s annual cost-of-living adjustments (COLAs) was tied to using chained CPI to index income-tax brackets. This would accelerate “bracket creep,” the process through which households find themselves in higher tax brackets because average incomes generally rise faster than inflation. “While the Social Security cuts due to chained CPI would stabilize at around 4 percent of outlays (being limited by the average recipient’s lifetime),” Biggs writes in NRO, “the income-tax increases would keep growing in perpetuity.” Why would Republicans oppose a cut to Social Security benefits tied to a tax increase that will disproportionately impact low- and middle-income households?
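A stylized example of the bracket-creep mechanism may help. The bracket threshold, tax rates, income, and inflation figures below are hypothetical, chosen only to show the direction of the effect; the roughly quarter-point annual gap between chained CPI and the standard CPI is a commonly cited ballpark, not a precise forecast:

    # How indexing brackets to a slower-growing price index raises taxes over time.
    # All parameters are illustrative.
    years = 20
    cpi_u = 0.025         # assumed annual growth of the standard CPI
    chained_cpi = 0.0225  # assumed annual growth of chained CPI (~0.25 points slower)

    threshold_today = 50_000   # hypothetical boundary between a 15% and a 25% bracket
    income_today = 60_000      # hypothetical household income, growing with the standard CPI

    threshold_chained = threshold_today * (1 + chained_cpi) ** years
    threshold_cpi_u = threshold_today * (1 + cpi_u) ** years
    income_future = income_today * (1 + cpi_u) ** years

    def tax(income, threshold):
        # Two hypothetical brackets: 15% below the threshold, 25% above it.
        return 0.15 * min(income, threshold) + 0.25 * max(income - threshold, 0)

    # More income lands in the 25% bracket when the threshold is indexed to chained CPI,
    # even though the household's real income has not risen.
    print(tax(income_future, threshold_cpi_u))
    print(tax(income_future, threshold_chained))  # higher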

And Biggs also argues that while there is indeed a case for restraining the growth of Social Security benefits for middle and high earners, applying chained CPI to COLAs is an unusually bad way to accomplish this goal. One of the points Biggs often emphasizes is that advocates of Social Security reform must focus on the central goals of the Social Security program and public pension policies more broadly, e.g., to eliminate poverty among older Americans. Social Security’s generous inflation adjustment is important in part because private pensions aren’t subject to a generous inflation adjustment. But rather than simply leave the Social Security program in its current form, Biggs has proposed reforms that would, among other things, strengthen the role of private savings while making Social Security more generous to beneficiaries with low lifetime earnings. That is, Biggs is not exclusively focused on making Social Security cheaper. His goal is to make it better.

New York continues:

With faux generosity (“It’s hard to blame the president alone for backtracking”), Biggs pivots from blaming Obama to blaming Democrats in Congress. He cites a plan to shore up Social Security by Representative John Larson. Biggs grudgingly concedes that Larson “at least attempts to balance the system’s tax revenues and benefit outlays,” which is Biggs’s way of saying that, according to an independent analysis by the Social Security Administration, Larson’s plan restores complete solvency to the Trust Fund over 75 years. He proceeds to complain that Larson’s plan raises taxes too much.

Of course, this doesn’t contradict Biggs in the slightest. Biggs describes Larson’s plan as “the most responsible bill” from a Democratic lawmaker, and his main objection is in fact that Larson “makes no attempt to hold down costs” — a pretty big point to miss. 

The New York post goes on:

At no point anywhere in his op-ed does Biggs mention Congressional Republicans, not to mention their repeated refusal to accept Obama’s deal that would have cut Social Security spending in return for closing tax deductions for the affluent. He is, of course, correct that many liberals opposed such a deal. But this merely illustrates how self-defeating it was for Republicans to spurn Obama’s deal. Cutting Social Security is extremely unpopular — as unpopular as just about any mainstream policy option. It is also essential to the conservative goal of restraining the size of government. Having a Democrat who is prepared to sign, and provide public cover to, Social Security cuts is an unbelievably fortunate opportunity for the right.

Let’s review: is it in fact “self-defeating” of Republicans to spurn a proposal that accelerates bracket creep while also reducing Social Security outlays in a way that risks undermining one of the better aspects of the Social Security program?

I recommend reading Biggs and Sylvester Schieber’s work on the state of retirement incomes, a subject they’ve addressed in the Wall Street Journal and, at greater length, in National Affairs.

Guide to the Senate Races, Michigan Edition: Rep. Gary Peters (D) v. Terri Lynn Land (R)






Michigan is generally considered a Democratic-leaning state, and it hasn’t elected a Republican senator since the defeat of Spencer Abraham. This year, however, the seat is considered at least somewhat competitive. The Democratic candidate, Rep. Gary Peters, worked as a financial advisor before being elected to the U.S. House of Representatives in 2008. He is also a member of the U.S. Navy Reserve. Terri Lynn Land, the Republican candidate, is a small business owner who served as Michigan’s Secretary of State for two terms after working as a county clerk.


Recent accusations by Land that Peters supported extending Obamacare to illegal immigrants and has flip-flopped on his immigration position have hurt Peters in polling and made immigration a key issue in the race. Peters’ support for Obamacare and left-of-center position on environmental policy are also sticking points in the campaign against him. Peters has emphasized environmental issues throughout his campaign, pointing out the piles of petcoke (a byproduct of petroleum refining that pollutes the air) left in Detroit by Koch Carbon — a company owned by the Koch brothers, who have been big financial supporters of Land’s candidacy.

Land started out ahead in the spring of 2013, but the lead changed hands several times before Peters emerged as the frontrunner in May 2014. Even with a consistent lead throughout the spring and summer, Peters has yet to clear the 50 percent hurdle.

 

Today’s Policy Agenda: Regulation Explains a Lot of the Variation in the Price of Housing






Obamacare’s growing costs to businesses are bad for some workers and consumers.

Rove and Co. has a report on the results of the 2014 Empire State Manufacturing Survey and the Business Leaders Survey, conducted by the Federal Reserve Bank of New York, which included a supplement on the effects of Obamacare on businesses.

About 80 percent of manufacturing leaders and 73 percent of business leaders surveyed said they expected the law to increase their costs in 2015, and the chart below shows how they expect to offset the new cost:

Some business and manufacturing employers will decrease the total number of workers they employ, make more positions part time, and reduce wages. But the most popular response was raising prices: 36.4 percent of manufacturing leaders and 25 percent of business leaders, respectively, said prices for consumers would rise.

Is the skills gap a myth?

At FiveThirtyEight, Andrew Flowers reports on a National Bureau of Economic Research working paper arguing that the U.S. workforce doesn’t lack the skills employers need:

Overall, the available evidence does not support the idea that there are serious skill gaps or skill shortages in the US labor force. The prevailing situation in the US labor market, as in most developed economies, continues to be skill mismatches where the average worker and job candidate has more education than their current job requires.

There is a prevailing narrative that the U.S. labor market has a lot of open positions because workers lack the skills to fill them. But in the NBER paper, Penn professor Peter Cappelli argues that most people are actually over-qualified for their jobs, and suggests other reasons why employers complain of a skills shortage when there isn’t one.

One of Cappelli’s explanations: Employers are perpetuating this narrative to shift the responsibility for skills acquisition away from the on-the-job-training model and onto individuals and the government.

The hidden housing construction cost: building regulations

In the Washington Post, Jeff Guo explains how building regulations further raise the cost of housing in some of America’s most expensive cities, citing a study by University of Michigan economists. 

The chart below suggests that much of the difference in housing prices has to do with regulations, not just, say, the price of land and materials:

The line represents the predicted cost of housing based on the land and construction costs in different cities, while the dots represent the actual costs — as you can see, cities like San Francisco are above that line, meaning regulation has raised their prices higher than you’d expect based on fundamentals.
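For a feel of the exercise behind that chart, here is a minimal sketch with invented numbers; the city figures are hypothetical placeholders, not the study’s data, and the gap between market price and fundamentals is only a rough proxy for the regulatory premium:

    # Hypothetical per-unit costs and prices in thousands of dollars; not the study's data.
    fundamentals = {          # land cost plus construction cost
        "City A": 250,
        "City B": 300,
        "San-Francisco-like city": 450,
    }
    actual_price = {
        "City A": 260,
        "City B": 310,
        "San-Francisco-like city": 800,
    }

    # The gap between the market price and the price fundamentals would predict
    # is a rough measure of the regulatory (zoning) premium.
    for city in fundamentals:
        print(city, actual_price[city] - fundamentals[city])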

These excessive regulations drive up the cost of housing for everyone: They make new construction cost-prohibitive, leading to housing shortages that keep prices high, while developers who need to sell new construction at high prices to compensate for the cost of regulations focus on luxury housing.

Today’s Policy Agenda: Congestion Is a Serious Economic Problem






Obama’s trying to persuade the elite establishment to support executive action on immigration.

Anna Palmer and Carrie Budoff Brown reported in Politico on the Obama administration seeking advice from and courting business leaders for coming executive actions on immigration. They write:

Earlier this month, senior aides from the White House counsel’s office, office of public engagement and the office of science and technology policy, among others, huddled with more than a dozen business groups and company officials to discuss potential immigration policy changes they could make. Smaller meetings with the White House and Department of Homeland Security aides have continued throughout the month. Administration officials are expected to present Obama with recommendations by the end of August.

Representatives from Oracle, Cisco, Fwd.US, Microsoft, Accenture, Compete America and the U.S. Chamber of Commerce were among those present at a wide-ranging Aug. 1 session that went through a list of asks for the tech sector that would involve rulemaking. Executive orders were not specifically discussed in that meeting, according to one source familiar with the session.

The ideas under discussion for executive action include allowing spouses of workers with high-tech visas to work, recapturing green cards that go unused and making technical changes for dual-purpose visa applications. Agriculture industry representatives have also been included in the meetings, discussing tweaks in the existing agriculture worker program.

The administration is also considering provisions for low-skilled workers for industries, like construction, that would allow individuals with temporary work authorization to gain work permits.

While the administration is clearly working to shore up political support for likely unprecedented executive maneuvers, there’s already a consensus in both parties on most legislative questions. In their new podcast, “Getting it Right,” Reihan and Patrick observe that George W. Bush and President Obama’s comprehensive immigration reform proposals appear remarkably similar and that elite preferences are fairly uniform regarding legalization and future flows. Reihan and Patrick speculate that this stems from big business – including the firms brought in by the White House to help shape coming immigration law changes – supporting such proposals out of self-interest, while the interests of native low-skill workers or even the preferences of their own constituents have less influence on elites.

Congestion is a serious economic problem.

In Urban Studies, Matthias Sweet of McMaster University explores the economic effects of traffic congestion and finds that it slows job growth. Sweet writes:

Traffic congestion alleviation has long been a common core transport policy objective, but it remains unclear under which conditions this universal byproduct of urban life also impedes the economy. Using panel data for 88 US metropolitan statistical areas, this study estimates congestion’s drag on employment growth (1993 to 2008) and productivity growth per worker (2001 to 2007). Using instrumental variables, results suggest that congestion slows job growth above thresholds of approximately 4.5 minutes of delay per one-way auto commute and 11,000 average daily traffic (ADT) per lane on average across the regional freeway network. While higher ADT per freeway lane appears to slow productivity growth, there is no evidence of congestion-induced travel delay impeding productivity growth. Results suggest that the strict policy focus on travel time savings may be misplaced and, instead, better outlooks for managing congestion’s economic drag lie in prioritizing the economically most important trips (perhaps through road pricing) or in providing alternative travel capacity to enable access despite congestion.

Given conservatives’ emphasis on work and spending time with family, fighting traffic congestion is a natural fit in a policy agenda. Replacing the gas tax with mileage-based user fees varied by time of day, as explained by Brad Wassink and Rick Geddes, could raise more revenue in a progressive manner while incentivizing drivers to avoid creating congestion. It would also make the costs of government more visible to the taxpayer than excise taxes included in the price at the pump. And all of it, it seems, could give the economy a boost too.

The costs alcohol imposes on society are incredibly high.

For Wonkblog, University of Chicago professor Harold Pollack recalls his experiences as a health researcher working around people who had been drinking, and looks at the data to show how deadly alcohol can be:

Surveys of people incarcerated for violent crimes indicate that about 40% had been drinking at the time they committed these offenses. Among those who had been drinking, average blood-alcohol levels were estimated to exceed three times the legal limit. Drinking is especially common among perpetrators of specific crimes, including murder, sexual assault, and intimate partner violence.

Correlation does not equal causation, of course. If offenders all stopped drinking, we wouldn’t see a 100-percent reduction in their crimes. Yet alcohol does play a distinctive role. It lowers inhibitions and, among some people, fosters aggressive behavior that ratchets up the risk that violence will somehow occur. In my own career as a public health researcher, I’ve come into close contact with many intoxicated heroin and marijuana users. In these moments, I’ve never had reason to feel that my safety was at risk. I have been present for some scary incidents. Almost every time, alcohol was in the mix, often as things were getting a little late in a tough neighborhood near a liquor store . .  .

Almost 40% of the homicide victims tested had some blood alcohol in their systems when they were killed. These data do not indicate actual blood-alcohol levels. Our previous work indicates that many homicide victims have alcohol in their systems above the legal limits for driving.

Reihan has advocated a significant increase in the alcohol tax before, and the logic for it is clear in this context: Alcohol is extremely expensive to society – in lives lost, domestic abuse committed, health-care dollars and resources spent, and families torn apart. Raising the alcohol tax would put consumers of the drug in a position where they bear the costs alcohol imposes on the rest of society. The price is the mechanism that best transmits to the consumer the information that this drug must be used in moderation.

The alcohol experience also provides important lessons for the more politically relevant drug of the day: marijuana. While prohibition of alcohol was a failure and a strong case can be made that the drug war has also been a failure, that’s not to say that following alcohol’s path of relatively light taxation and regulation is the right course for marijuana. The data in Pollack’s story is pretty clear on the ways a liberalized alcohol policy has been harmful, and the ideas of Mark Kleiman – legalize, but heavily tax and regulate to mitigate abuses of the drug – offer a different path that could have better results. 

A Monetary Policy for the 21st Century






Mark Blyth, a professor of international political economy at Brown and author of Austerity: The History of a Dangerous Idea, and Eric Lonergan, a hedge fund manager and author of Money, have a provocative article in the new Foreign Affairs that calls for the use of “helicopter drops” as a tool of monetary policy. As a fan of the London-based entrepreneur and writer Ashwin Parameswaran, a longtime proponent of helicopter drops, I was delighted to see the idea given such a prominent place in an influential magazine. What impresses me most about Blyth and Lonergan’s article is that unlike many other critics of austerity, they recognize that austerity isn’t just about spending cuts; it is also about tax increases. And they make a number of arguments that at least have the potential to appeal to right-of-center skeptics of quantitative easing and calls for debt-financed infrastructure investment as a recession-fighting tool.

First, the basic mechanism: 

Rather than trying to spur private-sector spending through asset purchases or interest-rate changes, central banks, such as the Fed, should hand consumers cash directly. In practice, this policy could take the form of giving central banks the ability to hand their countries’ tax-paying households a certain amount of money. The government could distribute cash equally to all households or, even better, aim for the bottom 80 percent of households in terms of income. Targeting those who earn the least would have two primary benefits. For one thing, lower-income households are more prone to consume, so they would provide a greater boost to spending. For another, the policy would offset rising income inequality.

Such an approach would represent the first significant innovation in monetary policy since the inception of central banking, yet it would not be a radical departure from the status quo. Most citizens already trust their central banks to manipulate interest rates. And rate changes are just as redistributive as cash transfers. When interest rates go down, for example, those borrowing at adjustable rates end up benefiting, whereas those who save — and thus depend more on interest income — lose out.

My own view is that given anxieties about the politicization of central banks, which Blyth and Lonergan acknowledge, it would be preferable to distribute cash equally to all households, on the grounds that such an approach is more “neutral,” and that it can safely ignore income fluctuations and disincentives (minor though they might be) at the margin. Those who are drawn to Blyth and Lonergan’s approach on egalitarian grounds might object to such a universal transfer, but they shouldn’t, as it is an alternative to the far more inegalitarian quantitative easing approach, the main effect of which is to prop up asset prices. As Parameswaran has argued, the chief benefit of helicopter drops is that instead of propping up asset prices (and bailing out big banks and business enterprises, a subject to which we will return), they “mitigate the consequences of macroeconomic volatility upon the people.” While quantitative easing and bailouts disproportionately benefit the asset-owning rich, helicopter drops leave the household income distribution untouched, leaving the question of redistribution to lawmakers. All that said, Blyth and Lonergan’s favored approach, in which households in the top fifth are excluded from the transfer, strikes me as preferable to the status quo. 

But wouldn’t helicopter drops create inflationary pressures? Blyth and Lonergan argue that inflation fears are overblown, as helicopter drops would be a flexible tool. Any inflationary pressures they generate could be mitigated by an increase in interest rates. And they make a compelling case that instead of fretting about inflation, there are good structural reasons for central banks to worry about the prospect of deflation:

[I]n recent years, low inflation rates have proved remarkably resilient, even following round after round of quantitative easing. Three trends explain why. First, technological innovation has driven down consumer prices and globalization has kept wages from rising. Second, the recurring financial panics of the past few decades have encouraged many lower-income economies to increase savings — in the form of currency reserves — as a form of insurance. That means they have been spending far less than they could, starving their economies of investments in such areas as infrastructure and defense, which would provide employment and drive up prices. Finally, throughout the developed world, increased life expectancies have led some private citizens to focus on saving for the longer term (think Japan). As a result, middle-aged adults and the elderly have started spending less on goods and services. These structural roots of today’s low inflation will only strengthen in the coming years, as global competition intensifies, fears of financial crises persist, and populations in Europe and the United States continue to age. If anything, policymakers should be more worried about deflation, which is already troubling the eurozone.

And Blyth and Lonergan appeal to legitimate concerns about the scale of asset purchases by noting that the cash transfers they envision would be modest in comparison:

There is no need, then, for central banks to abandon their traditional focus on keeping demand high and inflation on target. Cash transfers stand a better chance of achieving those goals than do interest-rate shifts and quantitative easing, and at a much lower cost. Because they are more efficient, helicopter drops would require the banks to print much less money. By depositing the funds directly into millions of individual accounts — spurring spending immediately — central bankers wouldn’t need to print quantities of money equivalent to 20 percent of GDP.

Later in their article, Blyth and Lonergan offer an intriguing scheme for debt-financed sovereign wealth funds (SWFs) as an alternative to the global wealth taxation envisioned by Thomas Piketty. Here is where Blyth and Lonergan repeatedly play against type by, for example, warning that “taxes on capital would discourage private investment and innovation” — a banal sentiment, you’d think, yet one that is far from universal among anti-austerians. They reference France’s recent budget problems to suggest that burdening upper-middle class households in the highest income tax brackets “would yield little financial benefit,” another provocative claim in some circles. And they explicitly contrast their call for new SWFs with talismanic, and often intellectually sloppy, calls for increased government-financed infrastructure spending. After noting that “infrastructure spending takes too long to revive an ailing economy,” they insist that infrastructure investments “shouldn’t be rushed” before noting the wastefulness of some infrastructure projects, an aside that came as music to at least my ears. The particulars of these debt-financed SWFs will give some critics pause, and I’d be eager to read a critical take:

[I]nstead of trying to drag down the top, governments could boost the bottom. Central banks could issue debt and use the proceeds to invest in a global equity index, a bundle of diverse investments with a value that rises and falls with the market, which they could hold in sovereign wealth funds. The Bank of England, the European Central Bank, and the Federal Reserve already own assets in excess of 20 percent of their countries’ GDPs, so there is no reason why they could not invest those assets in global equities on behalf of their citizens. After around 15 years, the funds could distribute their equity holdings to the lowest-earning 80 percent of taxpayers. The payments could be made to tax-exempt individual savings accounts, and governments could place simple constraints on how the capital could be used.

For example, beneficiaries could be required to retain the funds as savings or to use them to finance their education, pay off debts, start a business, or invest in a home. Such restrictions would encourage the recipients to think of the transfers as investments in the future rather than as lottery winnings. The goal, moreover, would be to increase wealth at the bottom end of the income distribution over the long run, which would do much to lower inequality.

Here Blyth and Lonergan anticipate the objection that public-sector purchases of financial assets risk deepening state control over private firms, an objection often raised when politicians contemplated investing Social Security funds in equities, by noting that central banks already hold enormous asset portfolios. Still, I worry that they might be too sanguine about the long-term consequences:

Best of all, the system would be self-financing. Most governments can now issue debt at a real interest rate of close to zero. If they raised capital that way or liquidated the assets they currently possess, they could enjoy a five percent real rate of return — a conservative estimate, given historical returns and current valuations. Thanks to the effect of compound interest, the profits from these funds could amount to around a 100 percent capital gain after just 15 years. Say a government issued debt equivalent to 20 percent of GDP at a real interest rate of zero and then invested the capital in an index of global equities. After 15 years, it could repay the debt generated and also transfer the excess capital to households. This is not alchemy. It’s a policy that would make the so-called equity risk premium — the excess return that investors receive in exchange for putting their capital at risk — work for everyone.
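For what it’s worth, the compounding arithmetic behind that claim holds up on the article’s own assumptions (a zero real borrowing cost and a 5 percent real return): (1.05)^15 ≈ 2.08, so a fund seeded with debt worth 20 percent of GDP would grow to roughly 41.6 percent of GDP over 15 years, enough to repay the original debt and still distribute capital worth about 21.6 percent of GDP. Whether those assumptions would survive 15 years of market and political risk is, of course, the real question.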

As we contemplate the aging of developed-world populations, technological advances that will continue to put pressure on market wages, and the growing temptation of elected officials to embrace rigid regulations as policy “solutions,” the effort to preserve free and open economies will require new strategies.

My main criticism of Blyth and Lonergan is that they ought to have emphasized the role of helicopter drops as an alternative to bank bailouts, a point Parameswaran stresses in “A Simple Policy Program for Macroeconomic Resilience” (in which he also usefully distinguishes between helicopter drops as tools for macroeconomic stabilization and basic income guarantees, which are conceptually distinct):

In order to promote system resilience and minimise moral hazard, any system of direct transfers must be directed only at individuals and it must be a discretionary policy tool utilised only to mitigate against the risk of systemic crises. The discretionary element is crucial as tail risk protection directed at individuals has minimal moral hazard implications if it is uncertain even to the slightest degree. Transfers must not be directed to corporate entities – even uncertain tail-risk protection provided to corporates will eventually be gamed. The critical difference between individuals and corporates in this regard is the ability of stockholders and creditors to spread their bets across corporate entities and ensure that failure of any one bet has only a limited impact on the individual investors’ finances. In an individual’s case, the risk of failure is by definition concentrated and the uncertain nature of the transfer will ensure that moral hazard implications are minimal. This conception of transfers as a macro-intervention tool is very different from ideas that assume constant, regular transfers or a steady safety net such as an income guarantee, job guarantee or a social credit.

The argument for bank bailouts is that they are necessary to prevent a catastrophic deflationary collapse. Yet direct transfers to individuals can do that just as well, if not better. And so banks can be allowed to fail, clearing the ground for new banks to emerge in their place.  If Blyth and Lonergan are seeking to build a broad coalition for their proposals, and I think they are, pressing the case against bank bailouts would be a good place to start. 

Should We ‘Tape Everything’?

Last week, I argued that on-duty police officers should be required to record their interactions with civilians with the aid of so-called “body cams” and, more controversially, that teachers should be recorded in the classroom. Though I lumped these two arguments together, they deserve to be teased apart.

First, I should note that I fell prey to technological triumphalism. The “hardware” of body cams can improve our criminal justice system. But what really matters is the cultural “software” that undergirds the system.

The case for police body cams is, for the reasons outlined in the column, fairly strong. Yet they’re certainly not a cure-all. As Radley Balko observes, it is not uncommon for police departments to have cameras and to not use them, or for cameras to malfunction at convenient moments:

So in addition to making these videos public record, accessible through public records requests, we also need to ensure that police agencies implement rules requiring officers to actually use the cameras, enforce those rules by disciplining officers when they don’t and ensure that the officers, the agencies that employ them, and prosecutors all take care to preserve footage, even if the footage reflects poorly on officers.

Assuming law enforcement agencies are using recording equipment properly, we then have to deal with the problem of “cultural cognition,” which Dan M. Kahan, David A. Hoffman, Donald Braman, Danieli Evans, and Jeffrey J. Rachlinski address in an April 2012 Stanford Law Review article, which Josh Chafetz of Cornell Law School kindly sent my way:

“Cultural cognition” refers to the unconscious influence of individuals’ group commitments on their perceptions of legally consequential facts. We conducted an experiment to assess the impact of cultural cognition on perceptions of facts relevant to distinguishing constitutionally protected “speech” from unprotected “conduct.” Study subjects viewed a video of a political demonstration. Half the subjects believed that the demonstrators were protesting abortion outside of an abortion clinic, and the other half that the demonstrators were protesting the military’s “don’t ask, don’t tell” policy outside a military recruitment center. Subjects of opposing cultural outlooks who were assigned to the same experimental condition (and thus had the same belief about the nature of the protest) disagreed sharply on key “facts”—including whether the protestors obstructed and threatened pedestrians. Subjects also disagreed sharply with those who shared their cultural outlooks but who were assigned to the opposing experimental condition (and hence had a different belief about the nature of the protest). These results supported the study hypotheses about how cultural cognition would affect perceptions pertinent to the speech-conduct distinction. We discuss the significance of the results for constitutional law and liberal principles of self-governance generally.

In a similar vein, it is easy to imagine that jurors reviewing a body cam recording of a police confrontation with a civilian would bring their “cultural cognition” to bear. In a case involving, say, a white police officer and an African American civilian, much could depend (alas) on the racial composition of the jury pool. Alex Tabarrok summarizes the work of Shamena Anwar, Patrick Bayer, and Randi Hjalmarsson on the impact of race on the outcome of criminal trials:

What the authors discover is that all white juries are 16% more likely to convict black defendants than white defendants but the presence of just a single black person in the jury pool equalizes conviction rates by race. The effect is large and remarkably it occurs even when the black person is not picked for the jury. The latter may not seem possible but the authors develop an elegant model of voir dire that shows how using up a veto on a black member of the pool shifts the characteristics of remaining pool members from which the lawyers must pick; that is, a diverse jury pool can make for a more “ideologically” balanced jury even when the jury is not racially balanced.

The authors’ results show not only that blacks and whites are treated differently depending on the composition of the jury pool but also that random variation in the jury pool adds to the variability of sentences holding race constant. Like is not treated as like. The results also suggest that we don’t need racial quotas to increase fairness. We can increase fairness and reduce variability in a racially neutral way by expanding the size of juries. Six-person juries have become common because they are cheap(er) but a return to twelve person juries would reduce the variability of sentences and greatly equalize conviction rates across race. [Emphasis added]

These findings about jury trials reminded me of Russ Roberts’ recent conversation with Barry Weingast, in which Weingast, a student of legal history, described the juries of ancient Athens. These juries were absurdly large by modern standards, with 201 jurors for a trial. These jurors would simply vote on the outcome of a trial after hearing the arguments of the two litigants. The reason for these large juries, according to Weingast, is that the goal of the law was not just to establish rules of conduct, but to establish rules of conduct that allow for the coordination of people’s expectations. And so it is important to understand what the shared expectations in a society actually are. A small jury could include a handful of eccentrics who don’t have a good handle on societal expectations. A large one, however, would give you a much clearer picture of the expectations of your typical Athenian. Something similar should apply, I would argue, in our own society. Stephanos Bibas’s The Machinery of Criminal Justice reminds us that something similar did apply in colonial America:

Colonial Americans saw criminal justice as a morality play. Victims initiated and often prosecuted their own cases pro se (without lawyers), and defendants often defended themselves pro se. Laymen from the neighborhood sat in judgment as jurors, and even many judges lacked legal training. Trials were very quick, common-sense moral arguments, as victims told their stories and defendants responded without legalese. Communities were small, so gossip flew quickly, informing neighbors of what was going on. Even punishment was a public affair, with gallows and stocks in the town square. True, punishments could be brutal, procedural safeguards were absent, and race, sex, and class biases all clouded the picture. Nonetheless, the colonists had one important asset that we have lost: members of the local community actively participated and literally saw justice done.

The point of jury trials was to empower communities, and to respect their values. In a more diverse society, there is a logic to ensuring that juries reflect this diversity. Among other things, this will tend to strengthen the legitimacy of law enforcement in diverse communities, which, as recent surveys remind us, is at a dangerously low level. As you can probably tell, I’m very interested in this subject and I’d like to revisit it.

On an entirely different note, I oversimplified the issue of recording teachers in their classrooms, as an acquaintance reminded me over email. Such recordings could help establish the facts surrounding disciplinary actions, which does strike me as valuable in itself. Yet these recordings also create the danger that teachers will be reduced to automatons, forced to follow narrow prescriptions as to what they can say and do. I still believe that recording teachers could be a valuable pedagogical tool, particularly if the recordings are available only to teachers, their colleagues, and their supervisors. But their mere existence raises the danger that, for example, litigious parents might demand access to them.
