The Corner

Against the AI Doomerism Consensus

I see overhyping everywhere in the AI debate.

In his thought-provoking piece on the societal risks posed by artificial intelligence, Phil Klein took as his jumping-off point the mega-viral essay published on social media this week by AI entrepreneur and investor Matt Shumer. In it, Shumer contributes to the emerging consensus that thinking machines will — not could — give way to catastrophic economic disruption and displacement. But Phil might have cited any number of similar manifestos. After all, that sort of talk is everywhere. And it seems to be coming primarily from AI investors like Shumer and the reporters who launder the industry’s talking points into the discourse.

Shumer’s essay, “Something Big Is Happening,” predicts that millions of Americans will not just be unemployed but unemployable within this decade. AI will soon start building itself out at an incomprehensible pace, ensuring that “almost all knowledge work” will be outsourced to robots. Legal work, software engineering, financial and medical analysis, customer-facing jobs like consumer services, and even (gasp!) professional writing will be the exclusive province of digital entities. You, too, will find your head on the chopping block . . . that is, unless you become an early adopter of AI technology. And for the low, low price of just $20 per month, you can protect yourself and your family from your otherwise inevitable obsolescence. But who could put a price on peace of mind, right?

For some reason, the sales pitch at the end of this thing raised no alarm bells among the dozens and dozens of influential elites who found the essay terrifyingly profound. The appeal was reminiscent of how AI developers have been crying out for federal regulation because, in the absence of government intervention, they might unwittingly scuttle civilization itself (it’s definitely not because the industry wants to erect barriers to entry into this still unencumbered sector).


Of course, Phil’s trepidation is prudent. All of us should strive for the epistemological humility to which the AI industry’s titans are apparently allergic. None of us knows how this will unfold, and the worst-case scenarios are worth considering. Phil’s scenario is a dire one: revolution, in the literal sense.

We’re not talking about revolution as a metaphor for a radical paradigmatic or ideational shift, or even dramatic reforms to the existing social contract. He means the sloughing off of the existing social order through a mass insurgency from below, albeit one led by a radicalized vanguard of elites. This is not a color revolution or the revolts that typified the Arab Spring. Phil envisions something like the French or Russian revolutions in which “the leaders of those revolutions were well-off or products of elite education.”

I share Phil’s apprehension over the casual radicalism that defines elite discourse, the rising tide of quasi-revolutionary violence among a small but effective cohort of aspiring insurgents, and the disregard institutional stewards have for the organizations in their care. “Throw into the mix a historic technological upheaval that is laser-targeted at an educated and well-connected class,” he writes, “and I fear that if the apocalyptic scenario plays out, it will be more destabilizing to our politics than anything we have previously experienced.”

Phil’s right about the historical dynamic in which the masses are led to insurrection by an elite, overeducated class of hyper-ideological wastrels. But what’s missing from Phil’s equation are the displaced and, importantly, dispossessed proletarians who form the ranks behind the vanguard.

The doomerism around AI anticipates mass displacement among white-collar workers, the majority of whom are stakeholders in society in ways that extend beyond their employment. They are likely to have access to denser social networks and to be influential constituents — influential enough to have their concerns heard and addressed by the political establishment. Most predictions anticipate that AI will be a top-down disruption rather than a bottom-up phenomenon. We’ve already seen what an economic calamity that hits the more financially secure first looks like: the 2008 financial crisis. The most radical reform to emerge from that epochal financial disaster was . . . the Consumer Financial Protection Bureau.

That’s a reform I don’t especially like. I think it represented a radical revision to the social compact and an affront to the Constitution. But no one would call that a revolution.

There is almost no room in the discourse for undesirable outcomes that fall short of catastrophism. After all, modesty and prudence do not go viral. But your second clue that something is off here should be the degree to which the doomsayers seem incapable of contemplating the ways in which AI can serve as savior as well as destroyer.

We talk only about the jobs that will be lost to AI, not those that will be created by it. The World Economic Forum’s Future of Jobs Report 2025 estimates that around 170 million jobs will be created globally by the end of the decade, roughly double the number of jobs lost to AI, automation, and other technological factors. That’s a ton of economic displacement (albeit perhaps a high-end estimate). But rather than wiping out whole sectors, it is just as possible that the workers displaced by AI will be retained in the sectors in which they’re already employed.

It defies logic to assume that an industry that grows as rapidly as AI is predicted to will not need human data scientists, research analysts, specialized engineers, and, yes, even support and administrative staff. In addition, sectors such as health care, agriculture, and emerging industries will require as much, or even more, human talent than they currently employ. Beyond that, the market will surely support fields that reject AI — or, at least, seek to tame it. Ethicists, safety specialists, and human-oriented alternatives to the digital world won’t be niche sectors, either. I’m not convinced that consumers will trade in their trusted financial advisers, general medical practitioners, and other human faces because, like robots themselves, they desire nothing more from their service providers than resource allocation strategies and best practices. People will still matter.

Just as irritating is the habit whereby participants in this debate default to the assumption that the only solution to AI’s disaggregating potential, whatever its scale, is big government. It seems the only idea the technocratic set has is to furnish the public with a taxpayer-funded universal basic income, providing their otherwise unproductive lives with a modest level of comfort.

There are already federal programs that provide displaced workers with access to job training, income support, and relocation allowances. There will certainly be calls for a more expansive welfare state. There always are. But it is also possible that the AI-dominated future will be one defined by radical economic mobility — one in which workers can do their jobs from anywhere, move from job to job frequently, and set their own terms of employment. That could light a fire under less revolutionary reforms, some of which might even be market-oriented. Among them: portable health-care plans, loosened professional licensing requirements, and even tax-code reforms that incentivize the retention of human capital.

Lastly, the AI revolution is predicated on the notion that everyone and every firm will adopt this new technology at a lightning pace. If so, it would be the first revolutionary technological advance to reach near-universal adoption without experiencing a crash along the way. You name it: cars, planes, the internet, video games, even electrification — none of those advances followed a straight trajectory to all but universal adoption. And the demand for AI has limits, just as the demand for those earlier technologies did and does. “As wages rise economy-wide, labor-intensive sectors with weak productivity growth claim a larger share of income,” the techno-optimist James Pethokoukis cautioned. “The result: Even spectacular AI gains may yield only moderate growth in overall productivity.”

I share Pethokoukis’s optimism, in part, because I see overhyping everywhere in the AI debate. When our fate is either a Bradburian future typified by federally subsidized lethargy or a Butlerian Jihad, we can safely assume that both the downside risks and upside prospects are overblown. To me, in the next five to ten years at least, something in the middle seems more likely.
