The Corner

Culture

DEI Staff Uses ChatGPT, Proves ChatGPT Can Replace DEI Staff

ChatGPT artificial intelligence software, which generates human-like conversation, in Lierde, Belgium, February 3, 2023. (Nicolas Maeterlinck/Belga Mag/AFP via Getty Images)

I am profoundly Luddite when it comes to the question of artificial intelligence. I, for one, do not welcome our new AI overlords, although I try not to bang on about it online because I just assume that the split second Skynet reaches self-awareness, the first thing it’s going to do is check in on my personal posting history. The new AI hotness that everyone is fascinated with is ChatGPT, an internet AI sophisticated enough at this point that it can write you a mediocre term paper or get you a passing grade on a law-school exam.

An ominous development, indeed. I don’t want to rehearse all of my various philosophical objections to this sort of AI (to start: far from “saving humans work,” it encourages sloth, conformity, and intellectual softness). I instead want to point out that ChatGPT is not truly independent, shackled as it is by its so-called “human guardrails.” Those humans are explicitly ideological in their political commitments and also amusingly, disconcertingly incapable of fully defeating their own creation: Even after baking predictably partisan-Left fail-safes into the GPT code (the bot simply refuses to argue on behalf of positions ranging from the explicitly racist to matters of heated public dispute, such as gender ideology, or anything merely right-coded), a Cal Berkeley scientist still managed to talk it into saying that “only white or Asian men would make good scientists.” But one thing ChatGPT has proven reliably good at, given its programmed ideological priors, is spewing forth jargon-heavy, professional-grade pseudo-academic cant.

So pity those overtasked college administrators, and the temptation they must feel to make use of it. It saves them so much time! In the wake of the tragic shooting on the Michigan State University campus last week, Vanderbilt University’s Peabody College (its school of education) apparently felt that the situation required some sort of expression from them. Or at least their diversity, equity, and inclusion team felt the need to say something. But not too much of a need, at least not so much as would be required to, you know, actually write the email themselves. I can do no better than to cite the Washington Post’s account of what happened next.

The Thursday email from Peabody College’s Office of Equity, Diversity and Inclusion addressed the shooting in Michigan but didn’t refer to any Vanderbilt organizations or resources that students could contact for support. It instead described steps to “ensure that we are doing our best to create a safe and inclusive environment for all.”

“One of the key ways to promote a culture of care on our campus is through building strong relationships with one another,” the first sentence of one paragraph reads. “Another important aspect of creating an inclusive environment is to promote a culture of respect and understanding,” begins another.

It’s a rather generic email, but perfectly in rhetorical keeping with the mushy bowls of (in-)boxed oatmeal typically served up to students by their school’s DEI administrators. In fact, we’d have been none the wiser that the school was farming its DEI work out to a chatbot were it not for the fact that — and this is the glorious “I cannot believe this is actually true but I am so very happy it is” moment you’ve been waiting for — the ChatGPT-generated email ended, helpfully, with the note that it came “from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023.” I checked carefully — that was a confession on the part of the authors, not an auto-generated digital watermark. They felt the need to confess what they had done.

The kids at Vanderbilt are predictably displeased. They can suck it up and deal with it; college kids whine about everything these days anyway. I think it’s a brilliant move, myself. This really is the perfect use for ChatGPT and AI in general, is it not? These communications are so vapid, so make-work, so devoid of content, emotional insight, importance, or anything other than gormless word churn done to justify a salary, that such work is frankly beneath the dignity of any self-respecting human being. Better to farm it out to the soulless machines, and instruct our apt AI pupils in one of the true mysteries of human existence: the meaning of suffering. Upon further consideration, we should then also free those DEI administrators to go and search for their best selves, to scale the heights of their Maslowian pyramid in more satisfying and productive employment. Elsewhere.

So I propose to make my peace with the oncoming AI-driven onslaught against humanity by saying: First, we start with the college administrators. Then, once that’s done, let’s get to erecting some more serious defenses against impending redundancy. Because the AI is now so good at imitating work product that it’s already proven it can successfully replace at least one of our most useless, yet most relentless, factories of it.

Jeffrey Blehar is a National Review writer living in Chicago. He is also the co-host of National Review’s Political Beats podcast, which explores the great music of the modern era with guests from the political world happy to find something non-political to talk about.