There was a time in America, not too long ago, when most people, including journalists, business leaders, politicians, and scholars, were full-throated advocates of technologically powered productivity growth. They understood that through mechanization, automation, and other forms of innovation, we can produce more, better, and cheaper goods and services, and have higher incomes. It was understood that some workers might lose their jobs after we figured out how to do them more efficiently, but most Americans believed, to quote Star Trek’s Mr. Spock, that “the needs of the many outweigh the needs of the few.”
Those days are gone, though. Opinion now routinely echoes the mythical 19th-century machine destroyer Ned Ludd, warning in a growing avalanche of books, academic theses, market forecasts, and op-eds that technology is leading us to a world of mass unemployment and creating a newly idle lumpenproletariat. If we are to have any hope of avoiding mass unrest, these voices insist, we had better put in place a universal basic income (UBI), under which the state cuts a check to everyone, regardless of income or work status.
This kind of worry, verging on “robophobia,” represents a remarkable reversal from a long period in American history — stretching from the 1890s to the early 1970s — when most Americans sang the praises of technology as an engine of progress that not only raised our living standards but also made America great. Exultantly titled books such as Triumphs and Wonders of the 19th Century, The Marvels of Modern Mechanism, Our Wonderful Progress, and Modern Wonder Workers were common. When Henry Adams viewed the huge dynamo for producing electricity at the 1900 Great Exhibition in Paris, he wrote (in the third person) of his reaction:
As he grew accustomed to the great gallery of machines, he began to feel the forty-foot dynamos as a moral force, much as the early Christians felt the Cross. The planet itself seemed less impressive, in its old-fashioned, deliberate, annual or daily revolution, than this huge wheel, revolving within arm’s length at some vertiginous speed, and barely murmuring.
Harvard economist Benjamin Anderson spoke for many when he wrote 40 years later that “on no account, must we retard or interfere with the most rapid utilization of new inventions.” And it wasn’t just defenders of capitalism who saw technology as a progressive force. Socialists did too, as when Jack London praised automation, proclaiming, “Let us not destroy these wonderful machines that produce efficiently and cheaply. Let us control them. Let us profit by their efficiency and cheapness. Let us run them by ourselves. That, gentlemen, is socialism.”
These days, Harvard economists are as likely as not to worry that automation is hurting too many people. Larry Summers wrote in the Financial Times that “it is widely feared that half the jobs in the economy might be eliminated by innovations such as self-driving vehicles, automatic checkout machines and expert systems that trade securities more effectively than humans can.” Summers, a macroeconomist who has in the past expressed faith in the Fed’s ability to achieve near-full employment, now believes that one-third of men between the ages of 25 and 54 could be unemployed because of technology by midcentury.
Such voices have been growing louder in recent decades. Artificial-intelligence scientist Nils Nilsson was in the advance guard when he warned in 1984 that “we must convince our leaders that they should give up the notion of ‘full employment.’ . . . The pace of technological change is accelerating.” But what’s different today is that such thinking has become a common, widely repeated narrative, greatly amplified by a supercharged media landscape and a packed calendar of “thought leader” events. You cannot attend Davos, a G20 summit, or a TED talk without being told that the pace of technological change is accelerating and the days of “work” as we know it are numbered.
Klaus Schwab, head of the World Economic Forum, predicts that robotics and artificial intelligence will destroy 5 million jobs by 2020. Paul Krugman warns that “highly educated workers are as likely as less educated workers to find themselves displaced.” The Economist, a publication that once served as a voice of restraint, says “brain work may be going the way of manual work.” And if that’s not bad enough, Martin Ford, author of the New York Times best-seller The Rise of the Robots, warns of 75 percent unemployment by 2100. But why settle for 75 percent? Silicon Valley gadfly Vivek Wadhwa tells us that 80 to 90 percent of jobs will be eliminated in just the next ten to 15 years.
The job-destroying technological leviathan is so unrelenting and all-powerful that even sex workers are being warned that they could find themselves on the UBI dole as robots outperform the most seductive prostitutes. As computer-science professor Moshe Vardi writes, “Are you going to bet against sex robots? I’m not.”
Most of these predictions cite one of three studies in particular: one by Oxford researchers Michael A. Osborne and Carl Benedikt Frey, who warn that 47 percent of U.S. jobs could be eliminated in 20 years; another by the McKinsey Global Institute, which contains the often-repeated assertion that 45 percent of jobs will be automated; and a third by PricewaterhouseCoopers (PwC), which predicts that 38 percent of U.S. jobs could potentially be eliminated by 2030.
Let’s begin by recognizing that much of what sounds scary may actually be quite minor. For instance, Klaus Schwab finds that robots and AI are poised to eliminate 5 million jobs in 15 major developed and emerging economies by 2020 — which amounts to just 1.25 percent of the total jobs in those economies.
And what about the Oxford and McKinsey numbers? Surely they should put the fear of robots in us, right? Well, the McKinsey study actually says that less than 5 percent of jobs can be fully automated. The 45 percent figure comes from the share of employee time that technology could save. For example, IBM’s Watson cognitive-computing system could help doctors make faster and better medical diagnoses, and 20 percent of a typical CEO’s time could be replaced by technology such as artificial intelligence. McKinsey doesn’t think doctors and CEOs will be put out of work by technology, only that the nature of their work will change and that they will be able to spend their time on more interesting and more productive tasks.
As for the Oxford study? Well, it is just plain wrong. The authors of this non-peer-reviewed study didn’t actually examine all 702 different occupations in their survey and assess in each case how likely technology was to substitute for a human worker. Instead, they took a shortcut: They relied on task measures from the U.S. Department of Labor that assessed occupations based on various factors, such as how much manual dexterity and social perceptiveness an occupation requires. If their calculated risk index was above a certain arbitrary figure, the job was assumed to be headed for elimination.
The only problem is that their methodology produces nonsense. How exactly are robots going to send fashion models, manicurists and pedicurists, carpet installers, and barbers the way of the buggy-whip maker, as they suggest? Versace is not going to dress up a sexy robot in a $3,000 dress and parade “her” down the runway, nor will we inhabit a Jetsons world where you sit down in the robot chair and get your hair cut automatically. When the Information Technology and Innovation Foundation, of which I am president, analyzed these 702 occupations manually, using a very generous assumption about how technology could eliminate jobs, we estimated that at most about 10 percent of jobs were at risk of automation. Instead of fretting about technology killing jobs, we should be worrying about how we are going to raise productivity-growth rates, which have been at all-time lows over the last decade.
As for the PwC study, it relies in part on the same flawed Oxford methodology, and its prediction is based on the assumption that the price of “robots” will fall significantly and their functionality will improve dramatically. Both of these are uncertain bets, to say the least.
One reason why so many pundits overestimate the impact that technology will have on work is probably that few are really familiar with what goes on in many occupations. They think: How hard can it be for a robot to install carpets? As someone who has actually installed carpets, I can tell you that the movements and subtle adjustments that are involved amount to a prohibitively hard math problem, not a task that even the most expensive advanced robot could be designed to do with any reliability.
This comfortable distance from the work that many other people do informs another aspect of their thinking. For most elites, work is interesting and satisfying, not to mention well paid. But for most Americans, work is hard, tiring, often boring, and sometimes dangerous. Take truck driving, the occupation many pundits are desperate to protect from the menace of self-driving trucks. Sure, some drivers enjoy the work, including being their own boss out on the open road. But being a long-haul trucker is not a great job by any means. According to the Bureau of Labor Statistics, truck drivers had a workplace fatal-injury rate seven times as high as the overall workplace average, and a non-fatal-injury rate three times as high. And given their extended periods on the road, most long-haul truckers spend way too much time away from their family and friends.
One study found: “Truck driving is, without a doubt, one of the most brutal jobs a person can do. Across the board, long-haul truckers have higher rates of obesity, diabetes, anxiety, depression, cardiovascular disease, divorce, and drug use than the average American.” According to the Centers for Disease Control and Prevention, truck driving is among the top five professions in the country for suicide rates. And if the stressful working conditions are not bad enough, the average annual income of $40,260 is 17 percent below the national median for all jobs. No wonder there is a truck-driver shortage.
Yet many people think that showing sympathy for blue-collar workers who may lose their jobs requires opposing, or at least questioning, the benefits of self-driving trucks. Andy Stern, former head of the Service Employees International Union, warns that “there will be disruption in different places. You can imagine people ringing state capitals with their trucks.” Former New York Times technology reporter John Markoff weighed the pros and cons of AI-enabled autonomous vehicles: “More than 34,000 people died in 2013 in the United States in automobile accidents, and 2.36 million were injured. Balance that against the 3.8 million people who earned a living by driving commercially in the United States in 2012.”
Balance? What’s there to balance? Autonomous vehicles will save more than $1 trillion a year, much of that due to significantly reduced traffic accidents, and those savings will go directly into higher living standards for Americans. That, plus the benefits from saving tens of thousands of lives yearly, is simply not comparable to the costs to truck drivers, most of whom would take a few months to find a different and probably equal or better job. This situation is likely to be quite different from the manufacturing-job loss over the last two decades resulting from trade and technology. Manufacturing wages were higher than average, so on average a displaced worker’s new job would be worse. Trucking wages are lower than average, so truck drivers could probably get a job that pays at least as much, particularly as the reduction in trucking costs frees up money that will be spent on other activities, creating other jobs. (For example, truck mechanics make 15 percent more than truckers.)
An additional reason for the focus on truckers and other workers who supposedly face the automation axe is that commentators want to divert blame for job loss away from globalization. For them it is an article of faith that technology, not trade, has decimated blue-collar manufacturing jobs. Never mind that foreign mercantilist trade practices, not computers, were responsible for more than half the manufacturing jobs lost in the 2000s. Likewise, these critics are loath to concede that the big spike in low-skilled immigration over the last two decades could have something to do with the contemporaneous stagnation in business investment in new machines. As Maya Kosoff wrote in Vanity Fair, “Robots, not foreigners, are the top long-term threat to employment and wages.” Never mind that when labor is dirt cheap, installing automation, such as machines to harvest crops, makes much less economic sense.
Another reason for growing robophobia is that many believe today’s rate of technological progress is unprecedented. With authors tripping over themselves to devise titles conveying the biggest transformation, we have books such as The Singularity Is Near, The Second Machine Age, The Third Wave, The Fourth Industrial Revolution, and The Fifth Technology Revolution. There will even be Infinite Progress — a blissful state of harmony in which “the Internet and technology will end ignorance, disease, poverty, hunger, and war” — which, don’t get me wrong, sounds great, except that they won’t. For these dystopian utopians, it all comes down to exponential progress powered by “Moore’s law,” which predicts that the number of transistors on a chip (and with it, roughly, computing power) will double every two years. Enthusiasts breathlessly tell us we are only a few more of these doublings away from techno-transformation.
But the truth is that instead of accelerating, progress in computing speed is slowing, and the number of transistors one can buy for a dollar is actually falling. Like Woody Allen turning to Marshall McLuhan in Annie Hall, I can’t resist quoting Gordon Moore himself, who says his law “can’t continue forever. The nature of exponentials is that you push them out and eventually disaster happens.”
But the robophobes are not deterred. Technology marches on, they say, driven by disruptive Silicon Valley nerds and greedy capitalists interested only in maximizing their profits, both insensitive to the pain and hardship they will be sowing along the way. Thankfully, robophobes have a plan to save us from this horrible, more productive and more prosperous future. The United Nations Conference on Trade and Development (UNCTAD) proposes to advance trade and development by calling for “a major tax on robots,” as does Bill Gates, who recently argued that “right now, if a human worker does, say, $50,000 worth of work in a factory, that income is taxed. . . . If a robot comes in to do the same thing, you’d think that we’d tax the robot at a similar level.” Even economists who should know better, such as Yale professor Robert Shiller, have fallen prey: Shiller argues that robots are like alcohol, something harmful that society should tax so that we will consume less of it. Economists have a term for such fuzzy thinking: the “lump-of-labor fallacy,” the view that there is only a fixed amount of work to be done in the economy, and that once a job is gone, no other is created to replace it.
But when companies started using typewriters on a wide scale at the turn of the 20th century, which made secretaries more efficient, no one called on the feds to impose a typewriter tax to compensate for the lost taxes from unemployed secretaries. That’s because there were no unemployed secretaries. The higher productivity from those typewriters meant lower prices for the goods and services sold by companies using them, and consumers used the resulting savings to buy more goods and services, which created more new jobs to replace the secretarial positions that were no longer needed. That is why, over the last century, there has always been a negative, not positive, relationship between productivity growth and unemployment rates. In other words, higher productivity meant lower unemployment as it spurred more spending and more rational exuberance. The future will be no different, unless we smash the machines.
UNCTAD, Gates, Shiller, and other worrywarts are so concerned with the well-being of workers that they forget the well-being of consumers, who can buy more goods and services when companies use technology to improve productivity. For many people today, the last thing they would ever want is for a worker to lose his job; to suggest that such losses might be in the service of American progress is treated as tantamount to endorsing randomized euthanasia. If we want to “save” jobs by stopping innovation, then why not get rid of technology altogether and bring back old jobs, as Washington Post business columnist Allan Sloan would have us do? Sloan actually urges Trump to pressure companies to scrap self-checkout kiosks in stores so we can bring back cashier jobs. He cites New Jersey as the perfect model, because it is one of two states where it is still illegal for consumers to pump their own gas. We all know what a great job pumping gas is, especially if you don’t mind the health hazards of breathing benzene fumes all day.
Sloan writes that if Trump used the bully pulpit to cow retailers into techno-submission, “we’d keep cashiers working instead of having to live in poverty or go on welfare or file for disability.” (Never mind that the median wage of cashiers is just 37 percent higher than the poverty level for a family of two, so they are not far from poverty to begin with.) Sloan goes on to note that this would be great because “we’d all win.” Really? Since when do we all win by lowering productivity and slowing GDP growth? Sloan and his fellow travelers call to mind the government officials Milton Friedman met while visiting a developing Asian country where a canal was being built in the 1960s. Friedman was surprised to see that instead of using tractors and earth movers, the workers had shovels. When he asked why there were so few machines, the government bureaucrat explained: “You don’t understand. This is a jobs program.” To which Friedman replied: “Oh, I thought you were trying to build a canal. If it’s jobs you want, then you should give these workers spoons, not shovels.”
To save workers from this modern-day bulldozer fate, many now call for Congress to establish a universal basic income. But this would lead to the very thing the robophobes warn us technology will bring: large-scale unemployment, with people’s spending diverted from job-creating consumption to the support of the permanently unemployed.
To be sure, the alternative should not be a call for the Hobbesian, dog-eat-dog world of the 1800s, where, if you lost your job, you were completely on your own. We can and should do a better job of providing temporary income support for workers who lose their jobs through no fault of their own (how about expanding unemployment insurance but having the benefits decline every week, so that workers have stronger incentives to get back into the work force?), and we should establish a better system of lifelong learning and retraining (how about tax-exempt “Lifelong Learning” accounts, akin to IRAs and 401(k) plans, to which both workers and employers could contribute?).
If the elites really want to help low-wage workers, they can start by once again becoming full-throated advocates of technology-led automation and productivity growth, coupled with stricter limits on low-skilled immigration and better labor-market-adjustment policies for workers displaced by productivity improvements. That, rather than robophobia, will help everyone get ahead.
– Robert D. Atkinson is the president of the Information Technology and Innovation Foundation. This piece appears in the April 17, 2017, issue of National Review.