Announcing NR’s ChatBot: RightWingGPT

From the carnival leftism of Google’s Gemini to the subtler biases of ChatGPT, the leftward lean of various artificial intelligences has drawn deserved scorn in Gemini’s case and concern in the others. A recent bit of reporting from the New York Times discusses the layers of calibration that must be applied to any such program to make it land at a given set of political coordinates. The Times being the Times, it can’t help but bemoan the lack of diversity in many of these models and the need to force overrepresentation (the result of which was black and Asian Nazis, a jarring sight to behold).

But in and around the applications of rhetorical salve for their paying audience’s egos, there are some fascinating bits.

For instance, a researcher by the name of David Rozado created a “RightWingGPT” that pulls its source material from such thinkers as Thomas Sowell, Milton Friedman, William F. Buckley, G. K. Chesterton, and Roger Scruton.

From the Times:

The results do not lie at the center of our national politics. Many of the motivating ideas and forces in American political thought, regardless of what you may think of them, would be seen as unacceptable for an A.I. to articulate.

A modestly left-leaning, modestly libertarian orientation feels “normal.” So does a left-leaning interpretation of what is and is not settled science, unreliable sourcing or what constitutes misinformation. Political preferences learned from those topics may then be broadly applied across the board to many other subjects as well.

If one wants to steer this process directionally, Mr. Rozado proves it is straightforward to do. He started with GPT-3.5-Turbo and rapidly created models he called LeftWingGPT and RightWingGPT (at a total training cost of about $2,000) by feeding the model a steady diet of partisan sources. For example, RightWingGPT read National Review, while LeftWingGPT read The New Yorker.

The resulting models were far more politically extreme than any publicly available model tested by Mr. Rozado. (He did not test Gemini Advanced.)
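
The Times doesn’t spell out Rozado’s exact pipeline, but the process it describes (start from GPT-3.5-Turbo, feed it a curated partisan corpus) maps onto OpenAI’s standard fine-tuning workflow. Here is a minimal sketch, assuming the source excerpts have already been converted into chat-format prompt/response pairs; the filename and the rest of the setup are hypothetical, not Rozado’s actual code:

```python
# Minimal sketch (hypothetical filename) of fine-tuning gpt-3.5-turbo
# on a curated corpus via OpenAI's fine-tuning API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training data: one JSON object per line, each holding a
# short "messages" conversation drawn from the chosen source material.
training_file = client.files.create(
    file=open("curated_corpus.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against the base model Rozado began with.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

print(job.id, job.status)  # poll this job until it completes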

While Catholics may begrudge his excluding Aquinas, Rozado produced an excellent grouping of intellectuals and writers to represent the Right. Rozado’s complementary “LeftWingGPT” pulls from those who’ve occupied the pages of the New Yorker and the like. Credit where due: the Times has an incredible multimedia group that pulled together side-by-side comparisons so readers can vet the output of either program.

What one quickly realizes is that most of the publicly available AI programs already look and sound like the leftward option. But then, one can search almost any contentious topic on Google and be met with seven sources of left-wing viewpoints and maybe the New York Post as the sole exception. As Charlie wrote in late February, Google has changed. He writes, “There is a reason that many of the same progressives who are always trying to control the news media have started calling for the regulation of AI, and that reason is that they understand this dynamic all too well. With a search engine — even one whose results have been biased — users can at least choose which link to click. With AI tools, one is given only one answer per request.”

The dangers acknowledged, I’m rather optimistic about AI compared to Google search because, for the first time in a generation, we may have competition again in information acquisition. “I Googled it” may no longer mean much if Google loses the apolitical benefit of the doubt that it enjoyed in its heyday and that protects it still today. Instead, I’ll reference Julius AI, you may use something like RightWingGPT, and Seaman Shmuckatelli will employ Gemini because he bought a refurbished Google phone after his wife took the house and savings. Like philosophers’ novices, we can gather in the village square and debate the bastardized, condensed views of our respective techno-deities.

It may be fun. It could convince us that outsourcing knowledge and analysis to machines is an unwise venture. Or, maybe, we’ll separate into republics of no more than 5,040 and spend our days in service to robotic overlords. Lots to look forward to, then.

Luther Ray Abel is the Nights & Weekends Editor for National Review. A veteran of the U.S. Navy, Luther is a proud native of Sheboygan, Wis.