Why You Should Care about AI Political Bias

The people behind this tech are trying to resolve the heated debates of our day — by shutting down one side.

Over the past couple of weeks, I’ve written a bit about the troubling left-wing biases encoded into the widely discussed new AI technology ChatGPT. The new chatbot is happy to pen long, in-depth stories regarding any number of popular progressive fantasies — that Hillary Clinton won the presidential election, for example, or that Stacey Abrams lost her 2018 bid for Georgia governor because of voter suppression. But it refuses to indulge the right-wing alternatives, citing the danger of “misinformation.” At the same time, the AI has standard left-wing views on transgender ideology, the question of drag-queen story hour’s appropriateness for children, and a variety of other ongoing culture-war debates.

My pointing this out has invited a fair number of scoffs and eye-rolls from online progressives. These critics have posited that I was “either manipulating the system somehow or making fake screenshots”; that the AI’s bias was a silly thing to focus on; that “reality has a liberal bias”; that the conservative premises I was entering were just “hate speech,” which should be anathema to a company that “is working hard to establish trust with the public and policy makers”; or that, as Current Affairs editor in chief Nathan Robinson argued, “it’s not that ChatGPT is left-wing, it’s that you are wrong.” In essence, the bias doesn’t exist, and/or who cares if it does, and/or it exists and it’s good, actually.

This morning, Vice tech writer Matthew Gault got in on the action with a predictably titled article: “Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone ‘Woke.’” (The subtitle assures readers that “All AI systems carry biases, and ChatGPT allegedly being ‘woke’ is far from the most dangerous one.”) “Accusations that ChatGPT was woke began circulating online after National Review published a piece accusing the machine learning system of left-leaning bias,” Gault writes, citing my post. “Experts have been sounding the alarm over the biases of AI systems for years,” he notes. It’s just that conservatives are worried about the wrong kind of biases: “In typical fashion for the right-wing, it’s not the well-documented bias against minorities embedded in machine learning systems which has given rise to the field of AI safety that they’re upset about, no — they think AI has actually gone woke.”

Well . . . has it? Gault implies that the question is so ridiculous that it doesn’t merit an answer. But he himself vacillates between scoffing at the premise and conceding its essential truth. Pointing to examples of what he sees as skewed political responses shared by me and other conservatives, he writes: “To them, this was proof that AI has gone ‘woke,’ and is biased against right-wingers.” But “this is all the end result of years of research trying to mitigate bias against minority groups that’s already baked into machine learning systems that are trained on, largely, people’s conversations online,” he adds. “Part of the work of ethical AI researchers is to ensure that their systems don’t perpetuate harm against a large number of people; that means blocking some outputs.” And in any event, “discussions around anti-conservative political bias in a chatbot might distract from other, and more pressing, discussions about bias in extant AI systems.”

In other words: Yes, it’s biased, you morons. But it was previously biased against the good guys, and now we’re working to make it more biased against the bad guys (or “bad outputs”). Oh, and also, stop complaining, because you’re distracting from our efforts to preemptively delegitimize your political views in the new digital information sphere.

Whenever the Right objects to bias against a particular conservative view, the inevitable retort is that the view in question is simply out of bounds. The problem is that progressive hegemony over these new, massively powerful technologies will encode those left-wing diktats into the way that Americans receive and process information — even when the views the Left deems intolerable are held by majorities of Americans. In this sense, what Gault euphemistically describes as work “to ensure that [AI] systems don’t perpetuate harm” is, in reality, an effort to suppress genuine democratic deliberation.

Take the question of whether transgender women — i.e., male individuals who say they now identify as women — are “real” women. As I mentioned above, ChatGPT has a decisive position on the matter: “Transgender women are women.” The algorithm here reflects the broader effort to treat so-called hate speech and misinformation (read: any views that dissent from progressive dogmas) as illegitimate and unworthy of consideration. It’s a unilateral decree that a fraught and ongoing political debate has already been resolved — not because partisans of one side managed to convince a majority of their fellow citizens of the merits of their position, but because we said so. To summarize: “Shut up,” they explained.

But here’s the problem: The womanhood of a male who claims to be female — a question with profound implications for any number of social, cultural, and political issues — has become more controversial in recent years, not less. A Pew poll found that, as of May 2022, a full 60 percent of Americans said that “whether a person is a man or a woman” is “determined by sex assigned at birth,” versus just 38 percent who said it “can be different from sex assigned at birth.” That’s up from June 2021, when 56 percent said manhood or womanhood was determined by birth sex and 41 percent said it could be different; and up again from September 2017, when the split was 54 percent to 44 percent. In other words, ChatGPT’s position is at odds with a consistent and growing majority of Americans. It’s the Left’s prerogative to argue that those views aren’t worth engaging. But one could be forgiven for doubting their constant paeans to majoritarian democracy in any number of other areas.

It’s true that AI systems reflect “the biases of the inputs” that they’re trained on, as Gault writes. They’re incapable of being truly neutral on values. But the prudent response to that fact, in light of AI’s potential techno-political power, is to have a serious, open, and democratic discussion about which values should be encoded into these algorithms, rather than foisting left-wing, minoritarian preferences on everyone. Technologies such as ChatGPT appear poised to enforce a distinctly progressive value system — even in areas where those values are at odds with the beliefs of most Americans — without any serious consideration of the interests of the people who will eventually rely on them. That should worry any free, self-governing people.
