The Corner

How to Worry about Artificial Intelligence

In today’s Wall Street Journal, I’ve got a short piece about how to approach the question of regulating artificial intelligence.

One question it has drawn, both today and when I’ve made this basic case to various people lately, is whether the kind of modest approach I propose might be inadequate to a technology that, after all, risks destroying the world and killing us all. It’s an odd question, I suppose, but this is an odd subject, and it offers a good excuse for putting a few cards on the table about how I have thought about the dawning artificial-intelligence revolution so far.

There is certainly an amazing amount of delirious catastrophism about AI and its implications in the public debate at the moment. Some of the most prominent leaders in the field openly wax hysterical about unleashing a world-ending technology. Some of this is salesmanship, oddly enough, and some is likely a desire to wield regulation to restrain future competitors. But some also looks to be a function of their being genuinely overwhelmed by how different today’s AI appears to be from traditional software.

A technology that responds to the same prompt differently every time you pose it and takes analytical steps you didn’t expect is bound to terrify engineers. Overcoming that dread will require the technical experts to exercise some humility: They will need to grasp that public policy deals with terrifying uncertainties all the time, and that sometimes policy-makers might actually be better equipped than engineers to figure out what to worry about and how much.

I suppose it’s imaginable that AI will quickly get out of human control and become an evil world-destroying menace. If that happens, I doubt aggressive regulatory approaches or the creation of a specialized agency in Washington would be of much use to us. But whether that is likely to happen, or more generally just what we should expect and prepare for with regard to AI, is a question that ought to be approached in a way that is reasonably rooted in the character of the technology as it is taking shape.

This hasn’t been easy in the AI policy debates so far. The experts are too deep in the weeds of both technical and regulatory particulars, while policy-makers are throwing around general principles. There is value to both, but to make them useful to each other and allow a constructive policy conversation to start, it will be essential to consider artificial intelligence from a kind of middle distance: to see its technical particulars in light of its general character and understand its broadest implications in light of what it actually consists of.

In the WSJ, I offer this very general sense of what the technology we now describe as AI tends to involve:

In essence it’s an analytical technology that learns complex patterns from training data and then draws on those patterns to make predictions about new data. With modest computing power, that looks like a guess at the next word in a text message. But with access to gargantuan amounts of data and vast computing capacity, it can look like a clean-prose response to a complex prompt approximating what a human would write. It can do the same with graphics, video and sound.

The potential of such technology is immense. Drawing on deep wells of existing information to produce a plausible next increment in response to a prompt is what a lot of intelligent human action looks like. Not only the knowledge but even the judgment of experts in many fields is a product of extended exposure to complex patterns of information. They develop a knack for knowing what should or will come next. AI can develop and apply a similar knack on a larger scale and far faster than most human experts, and that scale and speed will only grow.
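To make the mechanism a little more concrete, here is a deliberately crude sketch in Python (my own toy illustration, not anything from the piece, and nothing remotely like a real large language model). It learns which words tend to follow which in a made-up scrap of training text and then guesses a plausible next word: the "plausible next increment" idea in miniature.

```python
from collections import Counter, defaultdict

# A made-up scrap of "training data" for the toy example.
training_text = "the cat sat on the mat and the cat chased the mouse and the cat ran away"

# Learn the pattern: count how often each word follows each other word.
follower_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Guess the next word: the most frequently observed follower."""
    followers = follower_counts.get(word)
    if not followers:
        return None  # never saw this word, so no pattern to extend
    return followers.most_common(1)[0][0]

print(predict_next("the"))    # -> 'cat' (its most common continuation here)
print(predict_next("mouse"))  # -> 'and' (the only continuation observed)
```

Scale that counting up from word pairs to patterns learned across much of the internet, with vast computing capacity behind it, and the guess at the next word starts to look like the clean-prose response described above.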

But although the potential of this technology is enormous, it is nonetheless constrained. AI understood in these terms has particular capacities, strengths, and weaknesses, and these should help define both the public’s sense of what it portends and the approach that regulators take toward it. Not every policy question can be answered at this middle level between abstraction and the fine details, but the public debates should begin there.

One implication of seeing AI from this middle distance has to do with its creative potential, and its limits. Generative AI is inherently innovative: It traces a pattern discerned in a body of information and then extends it beyond that information. Yet it would be a mistake to confuse such incremental innovation with disruptive or destructive creativity. AI creativity (like some kinds of human creativity) is precisely not disruptive but continuous with the existing patterns of our experience. That’s how it works.

Such pattern extension may open truly new vistas in arenas where human beings are not fully cognizant of complex underlying patterns, particularly in some of the natural sciences. In fields like biochemistry and meteorology, where researchers struggle to wrap their minds and tools around immense natural complexity, AI has the potential for truly radical breakthroughs.

But in the realm of man-made patterns — in the social sciences, arts, humanities, culture, and most of our everyday experience — such AI could be at least as much a force for continuity, conformity, and conventionality. It may be more of a tradition machine than a breakthrough engine. That doesn’t mean it won’t produce anything new. Traditionalism can be highly generative. It just means it will produce new things in the patterns of existing ones.

A tradition machine can be a great force for good, facilitating further applications of the best that has been thought and said. It can also create space for more disruptive creativity by some human beings. Subcontracting some generative traditionalism to technology could channel more human creativity toward seeking new directions in various fields — as the emergence of photography led to more radical creativity in the visual arts.

But such traditionalist AI could have its dark sides, too. The risks it poses may have less to do with doomsday and disruption than with rigidity or conformity. We have begun to see this with concerns about bias in large language models (LLMs). It turns out that ChatGPT answers political and cultural questions with the conventional wisdom of the elite culture that produced the data upon which it has been trained. The so-called hallucinations of some LLMs — concocting sources that don’t exist — are also ways of filling gaps in the record to facilitate continuity, like the dishonest high-school debater who fabricates the perfect George Washington quote to make his point. This is a classic traditionalist temptation.

The problem could run much deeper than that, though. If AI responds to our prompts by effectively averaging the internet under the guidance of its programmers, then its growing pervasiveness will tend to make our culture more like it already is — and especially more like our very online elite. At least in its more mundane popular uses, it may be the perfect generative technology for an age of decadence, just as the internet has turned out to be the perfect communication technology for an age of solipsism.

In fact, the example of the early internet should help both sharpen and limit our worries on this front. When the web was first becoming pervasive, in the 1990s, many observers assumed it would send our society in unimagined new directions. Since the ’90s were a happier time, those fantasies tended toward the utopian, rather than the sorts of hellish apocalyptic nightmares that come more naturally to us these days. But they were no less fantastical. When I was a graduate student at the turn of the millennium, the political-science literature overflowed with guileless enthusiasm about how the internet would bring an era of “microdemocracy” and accountable government. That wasn’t what happened, perhaps mostly because it wasn’t what people wanted.

The internet has let our society more closely resemble our desires, which has been good and bad in the ways that our desires are. Artificial intelligence may similarly tend to let us become more like we already are and want to be. To worry well about that problem would mean worrying about our desires — which is a very daunting challenge but hardly a novel one.

It’s never a bad idea to ask whether your new project might destroy the world. But to actually prepare for the potential and the risks of AI, policy-makers might want to ask some more mundane questions, too.

This is part of why I suggest that the regulation of AI should proceed at least at first through the existing apparatus of American government. But it is also a reason more generally to think about both the potential benefits and the potential risks of AI in the terms we already have at our disposal for thinking about benefits and risks. We will adapt those as we go so they can help us contend with new kinds of questions, too. But the old familiar questions are likely to always matter most.

So let’s calm down, strap in, and make the most of what this new technology can offer us while keeping our eyes open to ways in which it might magnify our vices and make us more like our worst selves. AI is a very big deal, and it will cause us to take some exciting turns and to make some messy mistakes. But we can handle it.

Yuval Levin is the director of social, cultural, and constitutional studies at the American Enterprise Institute and the editor of National Affairs.