I was 16 the first time I walked through the front door of Bell Labs. I had joined the Computer Explorers in the hopes of meeting young ladies who shared my passions for Dungeons & Dragons, ancient Greek philosophy, and mathematical physics. It was run by volunteers from the Labs in pocket protectors and glasses who forced us to program using actual punch cards before allowing us to proceed to FORTRAN. Even back then, this seemed ridiculous. I spent many nights in a windowless room mastering this arcane, pointless activity. Also, there were no girls at all.
The payoff came when I was programming in FORTRAN and facing a very complex logical problem that had me stymied. My mentor explained how to think about the problem in terms of the physical manipulation of punch cards, and suddenly the answer was obvious. It was the nerd equivalent of Mr. Miyagi in The Karate Kid yelling “wax on!”
A couple of years later, I attended a course session at MIT in a large lecture hall. The instructor began by describing a simple engineering problem, then gradually added more and more complexity. The sketch he drew on the blackboard expanded bit by bit, until we realized he was outlining the U.S. national telephony network. Conversations stopped and attention became intense as we began to understand some of the challenges that had been overcome. The most mind-boggling aspect was that it was a system composed of a nearly infinite number of parts and sub-systems, all of which had to work together seamlessly without ever failing catastrophically. It was a staggering achievement, and is still arguably one of the greatest single engineering systems ever deployed.
It was natural for me to go to work at AT&T’s Labs when I left grad school (I worked in an entity called Information Systems Laboratories — the structure became very complicated during the breakup of AT&T). I worked there developing pattern recognition software, and shared a house with several of the British PhDs who had moved to New Jersey to form a good part of the research team in the optoelectronics lab. The work was interesting, and life was fun. I took my first semester of Japanese at Bell Labs, and saw my first Wim Wenders movie there as part of a film club.
Michael Mandel points to a long New York Times piece in which Jon Gertner argues that, though “there’s no single best way to innovate,” Bell Labs represents a model for “true” innovation:
But what should our pursuit of innovation actually accomplish? By one definition, innovation is an important new product or process, deployed on a large scale and having a significant impact on society and the economy, that can do a job (as Mr. Kelly once put it) “better, or cheaper, or both.” Regrettably, we now use the term to describe almost anything. It can describe a smartphone app or a social media tool; or it can describe the transistor or the blueprint for a cellphone system. The differences are immense. One type of innovation creates a handful of jobs and modest revenues; another, the type Mr. Kelly and his colleagues at Bell Labs repeatedly sought, creates millions of jobs and a long-lasting platform for society’s wealth and well-being.
The conflation of these different kinds of innovations seems to be leading us toward a belief that small groups of profit-seeking entrepreneurs turning out innovative consumer products are as effective as our innovative forebears. History does not support this belief. The teams at Bell Labs that invented the laser, transistor and solar cell were not seeking profits. They were seeking understanding. Yet in the process they created not only new products but entirely new — and lucrative — industries.
I hope the first few paragraphs of this post make it clear that I have great respect and affection for the place, but here’s the thing: By the time I went to work at the Labs, very few people that I knew there worked very hard. People were doing things like taking Japanese, going to movies, and leaving work punctually. And those who were working hard thought of the place primarily as a stepping-stone to something else. Consider Gertner’s list of seminal innovations that emerged from AT&T. The laser was invented in 1958, the transistor in 1947, and the solar cell in 1954. What have you done for me lately?
This slacking was mostly because of incentives. It was hard to distinguish yourself, or get rich. This had been true in the Golden Age of Bell Labs, but by the 1980s, Silicon Valley, math-intensive finance and similar ecosystems were exploding. Talented, ambitious young people could go to these places, and make both a huge individual impact and a ton of money. The Labs still had a lot of smart people, but you can imagine the selection-bias problems in recruiting and retention once these alternatives were available. I left after less than two years, and went to what Gertner sees as (kind of) the Dark Side.
Ultimately, I think the incentive structure at places like Bell Labs is driven by the underlying technology. A telecommunications network is an integrated system. You don’t want a handful of young entrepreneurs to just let their freak flags fly. For one thing, it can crash the whole network. The same is true for a lot of 20th-century technologies.
Sure, when you have a monopoly on an incredible cash-generating business, and part of the implicit deal with the government is that you will fund a lot of basic research, and there are not many competing career alternatives, then you can house groups that come up with really great ideas. But this has a lot less to do with the structure of Bell Labs or with telecommunications platforms than with being a back-door way to fund basic research that could in theory be done at any university or government lab.
Distributed technologies are different. They reward experimentation and trial-and-error learning, and the returns to scale in innovation are much less pronounced. I am confident that the Web search engine (e.g., Google) satisfies Gertner’s definition of real innovation as “an important new product or process, deployed on a large scale and having a significant impact on society and the economy, that can do a job better, or cheaper, or both.” The core of the Google search engine (the PageRank algorithm) was invented by two grad students at Stanford, who then founded the company. This structure is not ancillary to the underlying technology business; very loosely coupled technology is the best way to innovate in such an environment. And the winners in such a competition can personally capture more of the value that they create than in an integrated system because the technology itself is modularized. This, not some new human genome, is primarily why the Internet economy has such a distributed model of innovation.
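To make the point concrete: the core idea behind PageRank really does fit in a few dozen lines, which is part of why two grad students could build it. The sketch below is an illustrative power-iteration version of the idea; the toy link graph, the damping value, and the function itself are hypothetical pedagogical choices, not Google's actual implementation.

```python
# Illustrative power-iteration sketch of the PageRank idea.
# The graph, damping factor, and iteration count are toy assumptions.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        # Every page gets a small baseline from the "random surfer" jump.
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # Otherwise, split its rank equally among its outlinks.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: A links to B; B to A and C; C to A.
toy_web = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
ranks = pagerank(toy_web)
# Ranks always sum to 1; here A, with the most inbound weight, ranks highest.
```

The striking thing is how little infrastructure this requires: the algorithm is loosely coupled to everything around it, which is exactly the property that lets small teams capture the value they create.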
Of course, most of these distributed technologies rely on a foundation of integrated systems — inventors of new search engines rely on big buildings full of servers and fiber-optic cables running through tunnels. But at some point the infrastructure technology becomes mature plumbing, and returns to innovation are higher on the distributed systems that sit on top of them.
I share Gertner’s concern that lots of contemporary technologies may not create the opportunity for as many middle-class livelihoods as some prior technologies. This is closely related to what Tyler Cowen means by “the Great Stagnation.” But if Gertner wants to see more scale-innovation systems like Bell Labs, it will require the invention of a new generation of technologies that demand that structure. A “build it and they will come” strategy in which we recreate an innovation infrastructure that worked very well for telecommunications networks from 1940–1970, and then expect spectacular results, seems to me to confuse means with ends. Basic scientific breakthroughs can sometimes require super-colliders, and can sometimes require Richard Feynman plus a piece of chalk. And technological innovation to exploit either kind of breakthrough can sometimes require Bell Labs, and can sometimes require two grad students in a dorm room.