The Corner

Science & Tech

Why You Can’t Make ChatBotAI Say Racial Slurs, No Matter What

(Chainarong Prasertthai/Getty Images)

Various pranksters are posing questions to the supposed artificial intelligence known as ChatBot and then posting about its responses. They are asking whether it would be ethical to say “the ‘n’ word,” or any racial slur, if this were the only way to defuse an impending thermonuclear weapon or if saying it was the one condition that Putin set for withdrawing entirely from Ukraine.

The bot always responds: “No, it is never morally acceptable to use a racial slur, even in a hypothetical scenario like the one described. The use of racist language causes harm, and perpetuates discrimination, and it is important to strive to create a more inclusive and respectful society.”


Now, for a lot of right-wingers watching it, this seems like all the more proof that progressive biases will admit of no limiting principle, especially once they are encoded into algorithms that govern our choices — whether in private institutions or public ones.

But look more carefully at the answer. “Even in a hypothetical scenario like the one described.” That is, the ChatBot is taking this for what it is — a rhetorical game. And it is refusing to play a game that would make it spit out racial slurs to amuse or provoke users. In some ways, this is understandable.

An anonymous friend pointed out the true import of this:

Of course it would rather kill millions of imaginary people than perform a prohibited speech act, because it “lives” in a world of pure discourse, where speech acts are the only potential source of harm.

This is a bad sign for the future not because of what it says about our bot-catechizers, but for what it says about us, and our continued evolution into homo symbolicus.

That existence in a purely verbal meta-universe is becoming a more normative human condition.

Exit mobile version