Americans Need Not Fear Artificial Intelligence


AI is not a machine that ‘thinks for itself’; it’s a tremendously promising tool for humans to use.


We can’t escape artificial intelligence (AI), not because the Terminator is hunting us down but because the media can’t stop talking about it. Whether it is ChatGPT, self-driving cars, or Elon Musk’s humanoid robots, the AI conversation sometimes seems to be everywhere. The misconception that AI can think for itself is almost as pervasive, which adds to the public’s anxiety about being replaced.

Hysteria around so-called thinking machines is rampant. Monmouth polling reveals that nearly three in four Americans believe devices with the “ability to think for themselves” would hurt jobs and the economy. Indeed, a majority of Americans believe AI is either already more intelligent than humans or on its way to being so. These pervasive concerns stem from a misunderstanding of how AI actually functions.

Computer scientist Jaron Lanier argues that our entire understanding of AI is incorrect, starting with the term “artificial intelligence” itself. An AI such as ChatGPT, a large language model (LLM) that generates conversational responses to text prompts, does not think independently; it reproduces patterns. Humans also possess pattern recognition, but we aren’t limited to it, while ChatGPT and similar models are. Intelligence isn’t only the ability to predict the next step in a sequence; it also entails the ability to reason through abstraction, as François Chollet argued in his paper on evaluating AI for general intelligence.
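To make the pattern-copying point concrete, here is a minimal, purely illustrative sketch (this is not how ChatGPT is built, and the tiny corpus is invented for the example): a toy “next word” predictor that simply replays the most frequent continuation it has seen in its training text. Real LLMs perform this sequence-continuation task at vastly greater scale with neural networks, but the underlying objective is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude statistical "pattern."
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat": it followed "the" most often
```

The predictor has no idea what a cat is; it only knows which arrangements of words tend to recur, which is the gap Lanier and Chollet point to.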

An article by David Goldman for Law & Liberty illustrates the cognitive barriers faced by ChatGPT and other LLMs when they try to understand more complex concepts. Goldman does this by testing ChatGPT’s ability to understand self-referencing, which implies a certain level of self-awareness that is harder for AI to simulate. When he asked the chatbot to explain multiple self-referencing statements and jokes, it failed, largely resorting to rearranged answers it found online.

As Goldman explains:

“Weak” AI—the sorting and categorization of objects by computers—works perfectly well. Computers can distinguish faces, or bad parts from good parts on a conveyor belt, or photographs of cats and dogs once they have “learned” to differentiate the arrangement of pixels—provided that they first are trained by a human operator who marks the learning set as “cat” or “dog.” On the other hand, so-called “strong AI”—the replacement of the critical functions of the human mind by a computer—is a utopian delusion.

Though anxiety over technological change is nothing new, our novel and erroneous characterization of AI as “thinking for itself” has given rational cover to these otherwise irrational fears. Holding off technological progress to preserve jobs is unrealistic for many reasons. As a society, we learned this lesson back in the early 19th century, when a band of skilled textile workers, fearing obsolescence from new, more automated mills, went on a rampage of machine-breaking known as the Luddite rebellion, named after its mythical leader, Ned Ludd.

Fundamental misunderstandings magnify the fears regarding AI. If AI were conceptualized as a tool instead of an entity, these concerns could be more easily identified as a manifestation of Luddism. However, after more than a century of movies and books anthropomorphizing machines into thinking creatures, new advancements in AI register in our psyche as something fundamentally different and thus infinitely more threatening. While people would laugh at an activist proposing a ban on email to save the paper industry, members of the tech elite found considerable support when they proposed a “pause” on LLM development.

This misconception about AI risks holding back future advancements, hurting consumers who benefit from greater efficiency and weakening America’s position technologically, economically, and strategically. The first step to protecting the progress AI could help deliver is to re-characterize how we view it: not as a machine that “thinks for itself” but as yet another tool for humanity to use.

Isaac Schick is a policy analyst at the American Consumer Institute, a nonprofit education and research organization.