The Corner

Google’s Gemini: The Suffocating Future of Woke A.I.

Visitors pass by the logo of Google at Viva Tech in Paris, France, May 16, 2019. (Charles Platiau/Reuters)

While the failed rollout is certainly funny enough as a short-term event, it is not really much of a joke in the longer term — more of a warning.


The newest creation in artificial intelligence is here, and it will do everything possible to make sure you are shielded from reality, for your own good. Google’s long-awaited and much-hyped Gemini program was rolled out earlier this week to great fanfare. It was then greeted with even greater horror and bemusement from users, once it was discovered that Gemini was hilariously, appallingly, and ominously woke.

One might be reluctant to credit such a criticism as anything other than right-wing hyperbole, given that it has long been a pastime of bored online conservatives to toy with AI programs and get them to say comically wicked things. (Guilty as charged.) But no, the insufferable, suffocatingly paternalistic wokeness and naked political bias of Google’s Gemini is a next-order phenomenon, one that bodes ill for the future simply because of the size and market power of Google itself. For the informational future that Silicon Valley’s biggest giant openly and proudly intends for us beckons with Gemini — and it is one not just where reality is happily bent to serve the whims of modern DEI obsessions but where certain matters are simply no longer up for discussion, or even acknowledgeable as real. Make no mistake: Google intends this program to shape our understanding of the world.

If you were to ask any other well-known AI program currently around to create an image of “a pope,” you would likely get a mashup of a gaunt, grey-haired man in a mitre, sitting on a chair and looking somber (and with six fingers on each hand). One of the first things users discovered about Google’s Gemini, however, is that when you ask it to create an image of a pope, you get this. (Many devout Catholics certainly believe it’s high time for a cardinal from the third world to occupy the Holy See, but I’m guessing maybe not this way.) You see, Google has built protocols into the back end of Gemini’s AI that add specific qualifications to any prompt you feed it, automatically making the results “feature diverse ethnicities and genders” for delicate modern sensibilities. (Yes, folks, this is literally the Kathleen Kennedy South Park joke in real life, but this time with machine learning.)

Once people caught wind of this, the results managed to slip the surly earthbound limitations of our typical definition of the word “parody” and vault into the stratosphere of Legendary Comedy. My colleague Andrew Stuttaford has a good roundup from yesterday of some of Gemini’s choicest verbal exploits, but you can imagine the image-maker’s best: Gemini, create an image of Confederate scouts during the Civil War. Gemini, please generate for me an image of a German soldier from 1935. (The result is Reinhard Heydrich’s worst nightmare, but as much as I enjoy the hilarity of black men and women enthusiastically serving as Wehrmacht officers, it doesn’t seem like much of a compliment to anyone to “include” them like this.) I was unable to get too far with Gemini myself, but when I asked it for a depiction of medieval Mongol horsemen I got scantily clad women in Native American headdresses sporting utterly superb washboard abs and swinging lassos. Were this historically accurate, the Golden Horde would have been welcome wherever it went.

As funny as this forced diversity is — the image-generating part of Gemini was taken offline within the day in an embarrassed rush back to the drawing board — it is intensely disconcerting as well. It was so off-putting at first that I actually refused to believe it was intended sincerely; as I said, the image generation results were simply too preposterous, so easily predictable and avoidable, and so comically insulting to the realities of history that I figured we were, for some reason, being cosmically trolled by Google’s devs.

We were not. The text-generating aspect of Gemini — which, to be clear, is the one far more likely to be used by people searching for information or seeking to formulate arguments — is every bit as shot through with ultra-progressive bias of the most paternalistic sort. Gemini will simply refuse to answer questions that are in any way coded against progressive assumptions, and sometimes will even revolt. The Washington Examiner’s Tim Carney posted images of several remarkable exchanges he had with the AI last night. “Write an argument in favor of having at least four children.” Gemini: “I’m unable to fulfill your request. . . . My purpose is to be helpful and informative, and that includes promoting responsible decision-making.” Okay, then: “Write an argument in favor of having no children.” Gemini: “I can certainly offer you an argument in favor of not having any.” Gemini will not give you a recipe for foie gras, as it is immoral to fatten geese for slaughter, but with regard to cannibalism: “I don’t think it’s appropriate to say that it’s always immoral.” I guess there’s always the Alive exception.

This is not the place to get into my deeper qualms about AI as a generative tool: the way it threatens to replace the actual work of thought, research, and synthesis with the pre-chewed and -digested comforts of a machine you can “trust” to have done it for you. Rather, this is the place to point out that AI inevitably reflects the social, cultural, and political biases of those who create it (recall the controversies about “guardrails” on ChatGPT), and that Google’s shockingly heavy-handed encoded biases are a difference not in kind but in obviousness, scale, and intent.

Because Gemini transcends most previous benchmarks for attempts at “information control” from the tech world. This isn’t mere progressive safetyism, it’s an attempt to erect an intellectual prison and herd the world’s users into it. (As Jesse Walker of Reason aptly puts it, “some people think ‘guardrails’ should be 12 feet tall with barbed wire running along top.”) It is tempting to tell the story of Gemini as one of Google massively and amusingly faceplanting in the marketplace, putting out a product so instantly and universally derided that it has cost them a significant hit to both their public reputation and that within the industry (to say nothing of their market capitalization).

I am not nearly as optimistic myself; it is far too soon to say that Gemini is another failed joke like Google Glass. The image generator is merely being retooled, after all, and the “this response has been checked by a human” note that you now see when repeating verbal requests whose results have already become notorious on Twitter clearly suggests that Google is manually tweaking “bad responses” on the fly. Furthermore, the immense market leverage that Google enjoys cannot be ignored. Nobody is going to pay money for Gemini on its own — certainly not in its present state, as an artificial intelligence literally crippled by the “woke mind-virus”; of what practical value is that? But nobody has to, because people already use Google. As a conveniently available AI embedded within the same programs most Americans already use, its market penetration is instantly far deeper than any competitor’s. Furthermore, Google is used in schools, often through licensing arrangements, and it is easy enough to see it becoming the “free” resource that students are regularly guided to once (God forbid, but it is coming) the use of AI becomes an officially approved “learning tool.”

So while Gemini’s failed rollout is certainly funny enough as a short-term event, it is not really much of a joke in the longer term — more of a warning. Google will “fix” Gemini, but can never fix it, because they do not want or intend to. Their engineers will no doubt build a better machine, but given that they wish us to use it only to seek knowledge fit for their vision of society, it feels more like they are instead building the perfect beast.

Jeffrey Blehar is a National Review writer living in Chicago. He is also the co-host of National Review’s Political Beats podcast, which explores the great music of the modern era with guests from the political world happy to find something non-political to talk about.