During the Cold War, when rivalry between the capitalist West and communist East shaped international affairs, nonaligned countries were labeled the Third World. A rather vague designation, it came to denote low-income countries with weak states and institutions. When the Soviet Union fell, “developing” became the dominant modifier for such countries. The “first world” — the U.S. and its allies — became, by contrast, the “developed” countries.
The implication was that “development” — economic growth, technological progress, rising living standards — could take place only in low-income economies. This assumption shaped the globalization agenda of the post–Cold War period: Western policymakers sought to export technologies and governance practices to the undeveloped world, which would spur a convergence in global living standards and growth rates.
The “advanced” countries of the West, by comparison, had supposedly reached a steady state. The U.S. and Western Europe could grow only by increasing their populations (which, owing to low native fertility, required immigration) or by investing overseas. Developed countries thus paradoxically depended on their undeveloped counterparts. This view took for granted that the developed world was no longer capable of achieving the technological advances that drove economic growth in the 20th century.
Despite Silicon Valley’s public-relations efforts, which tout the transformative potential of new software, more and more thinkers argue that we are experiencing technological stagnation. Citing disappointing productivity numbers and the comparatively low impact of recent information-technology innovations, Peter Thiel, Tyler Cowen, Larry Summers, and others have made this case in recent years, but theories abound as to why it is happening. On one popular view, expressed most comprehensively by Robert Gordon of Northwestern University, Western researchers have picked all the technological “low-hanging fruit,” such as indoor plumbing, automobiles, and air travel. According to this theory, there are diminishing returns to science; once you’ve discovered fire and electricity, all future innovations will pale in comparison.
Economists Jay Bhattacharya and Mikko Packalen push back on this view in a new paper. “New ideas no longer fuel economic growth the way they once did,” they acknowledge, but they argue that the dearth of new ideas results not from the laws of physics but from the incentives faced by scientists.
Because academic papers are evaluated by how many citations they receive, scientists choose low-risk projects that are certain to get attention rather than novel experiments that may fail. Academics cluster into crowded fields because papers in such fields are guaranteed to be read by a large number of researchers.
This is a relatively new phenomenon, as citation analysis of scientific research was introduced only in the 1950s and did not become common until the 1970s. Eugene Garfield, who developed the idea of using citation quantity to evaluate the impact of journals, came to regret its use as a performance indicator for individual researchers.
Novel ideas are inherently unlikely to score well on measures of scientific impact, “since ideas develop slowly in their infancy,” the authors assert. The lag between “exploration” of a new idea and the discovery of its impact means that innovators will have to wait years before seeing the fruits of their labor (see graph below).
More likely, though, scientists exploring new ideas will fail to produce meaningful results at all. For every breakthrough idea, there are countless dead ends. A paper in the American Sociological Review concludes, “An innovative publication is more likely to achieve high impact than a conservative one, but the additional reward does not compensate for the risk of failing to publish.” But when researchers neglect new ideas, innovation cannot take place.
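The logic of that finding can be made concrete with a back-of-the-envelope expected-value calculation. The numbers below are purely illustrative assumptions, not figures from the American Sociological Review study: even when an innovative paper earns more citations if it succeeds, a lower chance of ever being published can leave its expected payoff below that of a safe, conservative paper.

```python
# Illustrative only: hypothetical publication odds and citation counts,
# chosen to show how the expected payoff of a risky project can fall
# short even when its upside is larger.

def expected_citations(p_publish: float, citations_if_published: float) -> float:
    """Expected citation payoff: probability of publishing times citations."""
    return p_publish * citations_if_published

safe = expected_citations(0.90, 10)   # conservative project: likely to publish
novel = expected_citations(0.40, 20)  # innovative project: bigger upside, riskier

print(safe, novel)  # 9.0 8.0 — the safe bet wins on expectation
```

A rational, citation-maximizing researcher facing payoffs like these picks the safe project every time, which is exactly the behavior the authors describe.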
CRISPR gene editing, one of the few recent breakthroughs in biotechnology, was developed by scientists over more than two decades. When the DNA sequences behind CRISPR were first discovered in 1987, their significance was unclear. Over the ensuing decades, papers on CRISPR that are now considered major advances were rejected by leading journals. Only after 25 years of tinkering, with few tangible results, did scientists discover the use of CRISPR DNA segments for genome editing.
Had Yoshizumi Ishino, who discovered the CRISPR sequence, prioritized impact over exploration, CRISPR would not have been possible. But Ishino is the exception that proves the rule. Bhattacharya and Packalen find that the vast majority of researchers aim for incremental advances.
They are not the first to present this hypothesis, but by formalizing it in a model, they have demonstrated the paradoxical dynamics behind scientific breakthroughs: The most important ideas are least likely to be recognized in their nascent states.
Bhattacharya and Packalen propose that “impact factors,” such as volume of research and number of citations, be coupled with “edge factors,” which measure the likelihood that a given paper will lead to a breakthrough. The “edge factor” would take the form of a textual analysis that combs papers for terms indicating novelty. By using this more holistic standard, the scientific community would make more funding and recognition available for potential breakthroughs.
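One minimal way to picture such a textual “edge factor” is to score a paper by the share of its terms that do not appear in a corpus of prior literature. The function names, tokenizer, and scoring rule below are illustrative assumptions of mine, not the authors' actual metric:

```python
# Hypothetical sketch of an "edge factor": the fraction of a paper's
# distinct terms that are absent from all prior papers in a corpus.
# A higher score suggests the paper uses newer vocabulary — a rough
# proxy for novelty, not a validated measure.

import re

def tokenize(text: str) -> list:
    """Lowercase the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def edge_factor(abstract: str, prior_abstracts: list) -> float:
    """Share of distinct terms in `abstract` unseen in any prior abstract."""
    prior_vocab = set()
    for doc in prior_abstracts:
        prior_vocab.update(tokenize(doc))
    terms = set(tokenize(abstract))
    if not terms:
        return 0.0
    return len(terms - prior_vocab) / len(terms)

prior = [
    "citation counts measure the impact of published research",
    "journal impact factors reward papers in crowded fields",
]
print(edge_factor("crispr sequences enable programmable genome editing", prior))
# 1.0 — every term is new relative to this (tiny) prior corpus
```

A real implementation would need a far larger corpus and more careful handling of phrases and synonyms, but the sketch shows why such a score rewards papers that introduce vocabulary the literature has not yet absorbed — precisely the papers citation counts tend to miss.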
Such a shift would require a broad reevaluation of the model employed in U.S. research funding. At present, the federal science-funding apparatus resembles a private-equity firm, targeting a specific rate of return by making stable, boring investments. If government agencies adopted elements of the venture-capital model, which funds numerous moonshots in the hope that one will succeed, they might help us escape this morass.
If the U.S. wants to maintain its technological edge, it must incentivize risk-taking and reward pioneers.