The Agenda

What If GDP Growth Remains Stubbornly Low?

Robert J. Gordon offers yet another pessimistic assessment of America’s future growth prospects in his latest NBER working paper. While the CBO projects that U.S. GDP will grow at an average annual rate of 2.2 percent over the next decade, Gordon estimates that the economy will instead grow at a rate of 1.6 percent:

Forecasts for the two or three years after mid-2014 have converged on growth rates of real GDP in the range of 3.0 to 3.5 percent, a major stepwise increase from realized growth of 2.1 percent between mid-2009 and mid-2014. However, these forecasts are based on the demand for goods and services. Less attention has been paid to how the accelerated growth of real GDP will be supplied. Will the unemployment rate, which has declined at roughly one percent per year, decline even faster from 6.1 percent in June, 2014 to 3.0 percent or below in 2017? Will the supply-side support for the demand-side optimism be provided instead by a major rebound of productivity growth from the average of 1.2 percent over the past decade and 0.6 percent for the last four years, or perhaps by a reversal of the minus 0.8 percent growth rate since 2007 of the labor-force participation rate?

The paper develops a new and surprisingly simple method of calculating the growth rate of potential GDP over the next decade and concludes that projections of potential output growth for the same decade in the most recent reports of the Congressional Budget Office (CBO) are much too optimistic. If the projections in this paper are close to the mark, the level of potential GDP in 2024 will be almost 10 percent below the CBO’s current forecast. Further, the new potential GDP series implies that the debt/GDP ratio in 2024 will be closer to 87 percent than the CBO’s current forecast of 78 percent.

This paper also has profound implications for the Federal Reserve. The unemployment rate has declined rapidly, particularly within the last year. Faster real GDP growth will accelerate the decline in the unemployment rate and soon reduce it beyond any estimate of the constant-inflation NAIRU, even if productivity growth experiences a rebound and the labor force participation rate stabilizes. The macro economy is on a collision course between demand-side optimism and supply-side pessimism.
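
A quick back-of-envelope check (mine, not Gordon’s) shows how the debt figure above follows from the GDP shortfall: if the projected dollar value of federal debt were held fixed while the 2024 GDP denominator came in roughly 10 percent lower, the CBO’s 78 percent debt/GDP ratio would rise to about 87 percent, even before accounting for any feedback from slower growth to revenues and to the debt path itself. A minimal sketch of that arithmetic:

```python
# Back-of-envelope check (mine, not Gordon's) on the debt/GDP figures quoted above.
# Assumption: the projected dollar debt is held fixed while the 2024 GDP
# denominator is roughly 10 percent lower than the CBO's forecast; any feedback
# from slower growth to revenues or to the debt path itself is ignored.
cbo_debt_to_gdp = 0.78  # CBO's projected 2024 debt/GDP ratio
gdp_shortfall = 0.10    # Gordon: potential GDP almost 10 percent below CBO's forecast

implied_ratio = cbo_debt_to_gdp / (1 - gdp_shortfall)
print(f"Implied 2024 debt/GDP: {implied_ratio:.0%}")  # ~87%, matching the paper's figure
```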

Three issues immediately come to mind. 

The first is that Gordon’s pessimistic thesis rests in part on a plausible account of the slowing rate of improvement in labor quality. For much of the twentieth century, the educational attainment of the U.S. workforce steadily increased; it now appears to have plateaued, for a variety of reasons. Gordon’s case is strengthened by the fact that less-skilled immigration appears to have lowered the average skill level of the U.S. workforce. As Mary Alice McCarthy has observed, “one in six U.S. adults lack basic literacy and numeracy skills, compared with one in twenty in Japan.” The presence of less-skilled immigrants in the workforce could allow other workers to invest more in their human capital by facilitating the outsourcing of household production. But it is not at all clear that this effect fully outweighs the direct impact of less-skilled immigration on average labor quality, or the indirect impact that flows from the fact that the children of less-skilled immigrants tend to have low rates of educational attainment relative to the U.S. average. One obvious way to address concerns about labor quality would be to embrace an immigration policy that increases rather than decreases the average skill level of the U.S. workforce. (Alternatively, one might conclude that the dampening effect of less-skilled immigration on growth in GDP per capita is not reason enough to oppose it, as it is fully compatible with growth in “income per natural.”)

The second is that, as Scott Winship argues in “The Affluent Economy,” even sluggish growth from a high base implies substantial absolute gains. 

The third is the question of “digital dark matter,” which Shane Greenstein and Frank Nagle discuss in a new paper:

Researchers have long hypothesized that research outputs from government, university, and private company R&D contribute to economic growth, but these contributions may be difficult to measure when they take a non-pecuniary form. The growth of networking devices and the Internet in the 1990s and 2000s magnified these challenges, as illustrated by the deployment of the descendent of the NCSA HTTPd server, otherwise known as Apache. This study asks whether this experience could produce measurement issues in standard productivity analysis, specifically, omission and attribution issues, and, if so, whether the magnitude is large enough to matter. The study develops and analyzes a novel data set consisting of a 1% sample of all outward-facing web servers used in the United States. We find that use of Apache potentially accounts for a mismeasurement of somewhere between $2 billion and $12 billion, which equates to between 1.3% and 8.7% of the stock of prepackaged software in private fixed investment in the United States and a very high rate of return to the original federal investment in the Internet. We argue that these findings point to a large potential undercounting of the rate of return from IT spillovers from the invention of the Internet. The findings also suggest a large potential undercounting of “digital dark matter” in general.
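
As a rough consistency check on the quoted figures (my arithmetic, not the authors’): if $2 billion corresponds to 1.3 percent of the stock of prepackaged software in private fixed investment, and $12 billion to 8.7 percent, both bounds imply an underlying stock somewhere in the neighborhood of $140 billion to $155 billion. A minimal sketch:

```python
# Rough consistency check (mine, not Greenstein and Nagle's) on the quoted ranges:
# the dollar bounds and the percentage bounds should imply roughly the same
# underlying stock of prepackaged software in private fixed investment.
low_usd, high_usd = 2e9, 12e9      # mismeasurement range, in dollars
low_pct, high_pct = 0.013, 0.087   # the same range as a share of the software stock

stock_from_low = low_usd / low_pct     # ~$154 billion
stock_from_high = high_usd / high_pct  # ~$138 billion
print(f"Implied software stock: ${stock_from_high / 1e9:.0f} billion to ${stock_from_low / 1e9:.0f} billion")
```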

Greenstein has also cited Wikipedia, which accounts for more than half of 1 percent of the time U.S. households spend online, as another “visible piece of digital dark matter.” Elsewhere, Joel Mokyr offers a closely related argument:

[E]conomists are trained to look at aggregate statistics like GDP per capita and measure for things like “factor productivity.” These measures were designed for a steel-and-wheat economy, not one in which information and data are the most dynamic sectors. They mismeasure the contributions of innovation to the economy.

Many new goods and services are expensive to design, but once they work, they can be copied at very low or zero cost. That means they tend to contribute little to measured output even if their impact on consumer welfare is very large. Economic assessment based on aggregates such as gross domestic product will become increasingly misleading, as innovation accelerates. Dealing with altogether new goods and services was not what these numbers were designed for, despite heroic efforts by Bureau of Labor Statistics statisticians.

The aggregate statistics miss most of what is interesting. Here is one example: If telecommuting or driverless cars were to cut the average time Americans spend commuting in half, it would not show up in the national income accounts—but it would make millions of Americans substantially better off. Technology is not our enemy. It is our best hope. If you think rapid technological change is undesirable, try secular stagnation.

Mokyr’s point is well taken. It could be that Gordon’s pessimistic assessment of U.S. growth potential rests on a failure to fully account for the value created by new technologies. Yet part of the reason economists are trained to look at aggregate statistics is that these statistics serve as a guide for policymakers. When Mokyr notes that halving the time Americans spend commuting would have no impact on GDP, he is highlighting a real limitation of GDP: shorter commutes would clearly make Americans better off. Yet unless they also lead Americans to work commensurately longer hours, shorter commutes will not produce an increase in taxable income that can then be used to finance, say, military expenditures or redistribution.

And this is important to understand: many new technologies make us better off in ways that can’t be captured as taxable income. Consider some of the recent research on “overearning,” which has been ably summarized by Dave Nussbaum:

Are people working too much? A psychological researcher trying to determine the answer would first need to ask, how would we even know? If someone works a lot, she may have perfectly good reasons. Perhaps she wants to save enough money to retire comfortably, to have a cash cushion for emergencies, or to have money to pass on to heirs. To answer the question, [Christopher] Hsee and his colleagues tried to strip away all complicating factors in order to study the underlying psychology.

Doing this is “an advantage rather than a defect or compromise,” says Hsee. His rationale is that if you remove the reasons people have to earn more money and yet still find that they overearn, “that shows the overearning is real.”

Nussbaum goes on to describe the experiment, which is quite clever. Let’s say productive workers across the country decided to stop “overearning”: what would happen to tax revenues? Nothing good, I suspect. I’m reminded of Eli Dourado’s contrast between technologies of control and technologies of resistance. As governments grow more intrusive, more effort is directed away from the “brute maximization of production” and towards “producing things that we already know how to produce in ways that have ancillary benefits,” like evading control. Or, in a related vein, a person might decide that (say) the interaction of means-tested subsidies and taxes is such that it’s best to cultivate a preference for non-pecuniary goods, which are arguably more abundant than ever, as Tyler Cowen maintains.

So a few things could be true at the same time: Robert J. Gordon could be right that U.S. GDP growth potential will be quite low for the foreseeable future; growth in GDP per capita could be understating the extent to which quality of life is improving; the federal government’s fiscal capacity could be constrained by sluggish GDP growth, which in turn might limit future increases in redistribution (per Winship, redistribution could still increase in absolute terms, but the scope will be limited all the same); and the key to life might be learning to love things that are free or cheap. When we talk about the importance of education as a policy matter, we generally focus on how educational attainment allows people to become better producers; but perhaps we ought to think more about how it might allow people to become “better” consumers (see Michael Schrage’s Who Do You Want Your Customers to Become? for more on a related idea). Eventually, we might come to believe that the chief injury caused by a lackluster education is not so much the way it reduces earning potential but rather the way it limits our ability to appreciate free or cheap stuff, like writing. This sounds rather self-serving, doesn’t it? I’m not totally sure I buy it myself.

Reihan Salam is president of the Manhattan Institute and a contributing editor of National Review.