The Agenda

Thoughts on Innovation Policy

Ross Eisenbrey of the Economic Policy Institute argues that government, not the private sector, has been the chief driver of what he calls “breakthrough innovation,” and he draws on the life of Douglas Engelbart to make his case:

Engelbart was a visionary, but his ground-breaking work was not supported by venture capital and his innovations were not the result of the private market or corporate enterprise. His innovations were not spurred by the prospects of incredible income and wealth, all lightly taxed. Rather, the work was funded and organized by a visionary bureaucracy in the U.S. government. As the Times describes it, “during the Vietnam War, he established an experimental research group at Stanford Research Institute (later renamed SRI and then SRI International). The unit, the Augmentation Research Center, known as ARC, had the financial backing of the Air Force, NASA and the Advanced Research Projects Agency, an arm of the Defense Department.”

Mariana Mazzucato, a professor of economics at the University of Sussex, has been making the point very effectively in lectures and a new book, The Entrepreneurial State, that the real innovation engine in the global economy is not business, nor the market, but the government.

Among other things, Mazzucato observes that the core technologies that power the iPhone (“capacitive sensors, solid-state memory, the click wheel, GPS, internet, cellular communications, Siri, microchips, touchscreen”) have their origins in research efforts financed by the U.S. military. And so Mazzucato favors a tax-financed, state-led approach to innovation:

Mazzucato suggests that, given the extent to which tech companies like Apple and Intel owe their great good fortune to the federal government’s investment in R&D, they should share more of their profits with the taxpayers. Instead, of course, Apple has been offshoring profits to avoid taxation and most of the tech industry is contributing to the efforts of the U.S. Chamber of Commerce and the rest of the organized business lobby to cut corporate taxes and shrink the government. As Mazzucato makes clear, cutting taxes and the government is no recipe for an innovative, competitive future—just the opposite.

Not surprisingly, I disagree with the general thrust of Eisenbrey and Mazzucato’s take. Consider, for example, the extraordinary success of Samsung, the Korean multinational that, among other things, has developed a series of successful iPhone competitors. Samsung has a long history of entanglement with the Korean state, which might seem to reinforce Eisenbrey and Mazzucato’s basic point. It is also true that Samsung, which represents as much as one-fifth of South Korean GDP, makes use of capacitive sensors, solid-state memory, the click wheel, GPS, internet, cellular communications, Siri, microchips, and touchscreens, much like Apple. But this presumably doesn’t mean that Samsung ought to pay the U.S. federal government some kind of tribute.

As Amar Bhidé often notes, an Englishman pioneered the World Wide Web under the auspices of the government-financed CERN laboratory in Switzerland, yet the U.S. has been the main source of consumer internet innovation. U.S. internet firms do not, however, pay the Swiss and other European governments a formal innovation bounty. Part of the reason is that everyone profits from the free flow of knowledge, which is why excessive patents are such an economic scourge.

The U.S. government devised the technologies Mazzucato identifies for its own, usually defense-oriented reasons. Mazzucato implicitly suggests that in a counterfactual universe in which the Cold War had never taken place, and in which defense expenditures hadn’t diverted spending from other domains or forced higher tax levels, innovations in information technology would not have taken place either. The decades that preceded the Cold War, during which there was considerable private-sector innovation in early information technologies, suggest that this is not the case, but of course we can’t really say. What we do know is that in our world, incremental innovations by private firms made the defense-oriented technologies that power smartphones more useful over time. What we are dealing with is a complex innovation ecosystem, in which the government undoubtedly plays a role. Yet to characterize the government’s role as the more important or more essential one strikes me as a mistake.

To explain why, it’s important to first think through what exactly we mean by “breakthrough innovation” and whether or not breakthrough innovations are what matter most for economic development. Bhidé offers a useful framework for distinguishing among different kinds of innovation:

Innovation involves the development of new products or processes and the know-how that begets them. New products can take the form of high-level building blocks or raw materials (for example, microprocessors or the silicon of which they are made), midlevel intermediate goods (motherboards with components such as microprocessors), and ground-level final products (such as computers). Similarly, the underlying know-how for new products includes high-level general principles, midlevel technologies, and ground-level, context-specific rules of thumb. For microprocessors, this know-how includes the laws of solid-state physics (high level), circuit designs and chip layouts (midlevel), and the tweaking of conditions in semiconductor fabrication plants to maximize yields and quality (ground level).

Technological innovations, especially high-level ones, usually have limited economic or commercial importance unless complemented by lower-level innovations. Breakthroughs in solid-state physics, for example, have value for the semiconductor industry only if accompanied by new microprocessor designs, which themselves may be largely useless without plant-level tweaks that make it possible to produce these components in large quantities. A new microprocessor’s value may be impossible to realize without new motherboards and computers, as well.

New know-how and products also require interconnected, nontechnological innovations on a number of levels. A new diskless (thin-client) computer, for instance, generates revenue for its producer and value for its users only if it is marketed effectively and deployed properly. Marketing and organizational innovations are usually needed; for example, such a computer may force its manufacturer to develop a new sales pitch and materials and its users to reorganize their IT departments.

Government-financed technological innovations in one country or region, in other words, can travel to another, provided the second country or region has the requisite amount of “absorptive capacity”:

The willingness and ability of lower-level players to create new know-how and products is at least as important to an economy as the scientific and technological breakthroughs on which they rest. Without radio manufacturers such as Sony, for instance, transistors might have remained mere curiosities in a lab. Maryland has a higher per capita income than Mississippi not because Maryland is or was an extremely significant developer of breakthrough technologies but because of its greater ability to benefit from them. Conversely, the city of Rochester, New York—home to Kodak and Xerox—is reputed to have one of the highest per capita levels of patents of all US cities. It is far from the most economically vibrant among them, however.

In a similar vein, many of the government-backed technologies identified by Mazzucato did remain “curiosities in a lab” for long periods of time — it took entrepreneurs looking to solve specific problems to drive their commercialization, and this commercialization process entailed a series of process innovations that made these technologies far more valuable than they otherwise might have been. 

And the government-financed U.S. industrial policy Mazzucato invokes was, as Stephen S. Cohen and Brad DeLong suggest in The End of Influence, an artifact of the swollen defense budgets of the Cold War era.

On its own turf—competing with other militaries, especially the Soviets—DoD was unbeatable, but it was not aiming at producing products and firms that were well adapted to competition in civilian markets vigorously contested by the agile. For rockets, satellites, jet engines, aircraft, mainframes, and supercomputing (the latter two now quaint terms), this was not a problem. It became a problem, a serious one, in electronics. By the 1980s, spin-off was losing the race to “spin-on” (i.e., sourcing from the commercial sector) in electronics, the critical defense technology. The commercial sector innovated, embodied innovation, and made it reliable much faster than did the defense sector; commercial entities produced at vastly greater volumes and at far lower prices. DoD increasingly had to source vital components of military systems—semiconductors, lasers, flat panel displays, optical storage, etc.—from the civilian economy. Increasingly, this meant relying on Japanese, not American, mass producers.

In 1986, DoD went public in its Defense Science Board report, “The Use of Commercial Components in Military Equipment.” The document revealed that commercial electronics such as computers, radios, and displays were just as durable, even in harsh environments, one to three times more advanced, two to ten times cheaper, five times faster to acquire, and more reliable than their military equivalents. For the foreseeable future, the report concluded, “commercial-to-military ‘spin-ons’ are likely to boom while military-to-commercial ‘spin-offs’ decline.” Even beyond electronics, DoD’s spin-off industrial policy was beginning to show its structural defects or, more precisely, the commercial ironies of its successes. In the 1950s and 1960s, DoD had sponsored the creation of very advanced, numerically controlled machine tools for use in aircraft production. And it got them. But, in doing so, it had also shaped the American machine tool industry. By the 1980s, the defects of this unrivaled excellence were becoming apparent in the steep decline of that industry in the face of foreign competition. DoD-inspired technology was proving to be more expensive and far more complicated to operate under normal industrial conditions than the simpler, cheaper machines that came out of Japan’s industrial policy, which focused precisely on tools for ordinary industrial applications, or German high-end, factory-friendly machine tools.

Like an intelligent military recognizing the limits of its forces, DoD has continued to push ahead on the spin-on, going so far as to establish its own venture firm in Silicon Valley to monitor, access, and assist interesting new technologies in start-up companies. Truly new technologies such as computing, biotechnology, or even, long ago, electric motors take considerable time to move, at scale, from laboratory to market; twenty years is rather a norm. It is possible that spin-offs from relatively recent DoD projects will find important, driving roles in the American economy in the near future. But the contrast between the commercial successes of spin-offs from DoD projects of the 1950s and 1960s and those of the 1980s and 1990s brings American assertions that we don’t do industrial policy a lot closer to God’s honest truth.

This transition from “spin-off” to “spin-on” doesn’t demonstrate that government doesn’t have an important role to play in financing basic research, particularly basic research with potentially useful military applications. It does, however, suggest that much has changed since the days of the ARPANET, and that the lessons of that era might no longer apply.

Eisenbrey’s reference to Douglas Engelbart reminded me of Ashwin Parameswaran’s recent discussion of why Engelbart’s approach to computing failed to take hold. As Ashwin explains, Engelbart believed that computing should aim to “augment human intelligence”:

Engelbart was dismissive about the need for computing systems to be easy-to-use. And ease-of-use is everything in the mass market. Most people do not want to improve their skills at executing a task. They want to minimise the skill required to execute a task. The average photographer would rather buy an easy-to-use camera than teach himself how to use a professional camera. And there’s nothing wrong with this trend.

But why would this argument hold for professional computing? Surely a professional barista would be incentivised to become an expert even if it meant having to master a difficult skill and operate a complex coffee machine? Engelbart’s dismissal of the need for computing systems to be easy-to-use was not irrational. As Stanislav Datskovskiy argues, Engelbart’s primary concern was that the computing system should reward learning. And Engelbart knew that systems that were easy to use the first time around did not reward learning in the long run. There is no meaningful way in which anyone can be an expert user of most easy-to-use mass computing systems. And surely professional users need to be experts within their domain?

The somewhat surprising answer is: No, they do not. From an economic perspective, it is not worthwhile to maximise the skill of the human user of the system. What matters and needs to be optimised is total system performance. [Emphasis added]

The extraordinary economic benefits created by computing flow from the ways in which it allows unremarkable people to accomplish what had once been enormously difficult, cognitively demanding tasks. Engelbart developed the mouse, yet it took a highly decentralized process to yield user-friendly technologies that actually put the mouse to good use. In a similar vein, a rudimentary steam engine had been devised in antiquity, yet it failed to spark an Industrial Revolution in the ancient Mediterranean.

Basically, I think it’s fair to say that government has a role to play in financing basic research. But it is important to recognize that U.S. firms and workers won’t necessarily capture the lion’s share of the benefit, nor should we expect them to do so, as higher-level knowledge tends to travel well. If our goal is to make U.S. workers and firms more successful, we really ought to focus on the “absorptive capacity” of the U.S. economy: raising the skill level of the workforce, facilitating firm entry (which will in turn facilitate process and product innovation), and so on.

Reihan Salam is president of the Manhattan Institute and a contributing editor of National Review.