The Corner

AI: Weird Scenes inside the Google Gold Mine

Google logo at the Viva Tech start-up and technology summit in Paris, France, in 2018. (Charles Platiau/Reuters)

Google blundered badly in releasing an AI tool that so visibly reflected some of the biases running through its corporate ideology. Long-standing suspicions that the company tilted to the left appeared to be confirmed (once and for all) by the, uh, interesting artwork generated by Gemini.

A picture is worth a thousand words, and all that.

Writing in his Substack, The Ruffian, Ian Leslie:

People say Big Tech is evil, but I think Google has been admirably public-spirited in providing the world with a laugh during these dark times. In case you missed this story, I’ll summarise. Last week, Google released Gemini, an AI chatbot and image-generator intended to compete with similar apps from OpenAI and others. It could hardly have been a more important launch, and it could hardly have gone more wrong. . . .

These images [Leslie gives examples] weren’t generated by a few mischief-makers fiddling with prompts; they came up again and again in response to standard questions. The problem wasn’t just restricted to image generation, and it wasn’t just about diversity. Gemini had a very distinct worldview (I’m using the past tense because, in response to the furore, Google has paused its image generation tool and neutered its chatbot). I guess you could call it woke, but that doesn’t quite convey how extreme it was, or how silly. It’s more like someone performing a crude parody of woke.

Then again, true believers can often come across as parodists of their own ideology.

Leslie asks how Google could come up with a product that was so “obviously faulty” and suggests that the company’s culture was such that those working on Gemini did not believe that its content was flawed.

The company’s initial explanation for the ‘diverse imagery’ problem is that it was just a bug, rather than a feature of the system. That seems disingenuous. This product would have been relentlessly tested and tuned before being unleashed on the world. Gemini’s quirks seem more likely to have been the output of a corporate culture that doesn’t realise how weird it is.

In my recent post on how to fix DEI I suggested that organisations make an effort to understand how the cultural-political worldview of their staff compares to their median user (or voter). It’s not that all organisations should try and be a mirror of the public, it’s that, in a highly politicised environment, they should be self-aware enough to know how the profile of their staff differs from the profile of the people they serve.

The anthropologist Joseph Henrich famously reframed our supposedly neutral, objective Western worldview as a WEIRD one (Western, Educated, Industrialised, Rich, Democratic). His point wasn’t that WEIRD is bad, just that it’s, well, weird; shared by only a minority of the global population. For an individual or an organisation, it’s not necessarily a good thing to be normal, but it’s nearly always a good thing to know when you’re not normal. To adapt the poker truism, if you look around the table and you can’t see who the weirdo is, it’s you.

Last week was, or ought to have been, a moment of self-insight for Google. As Nate Silver puts it, Gemini displayed “the politics of the median voter of the San Francisco Board of Supervisors” — i.e. it behaves like a left-wing outlier even versus America’s educated and relatively liberal classes. If Gemini does indeed reflect the internal culture of Google, a company which serves the whole of the world, then the problem for Google goes way beyond the launch of Gemini.

When Google was only serving us information from other websites, the political outlook of its staff was less of an issue. The genius of the PageRank-based search engine is that it merely shows us what other people are looking at (in essence). As Paul Graham says, it’s just math. Google Search is more like a librarian, telling us where to go to get what we’re looking for, rather than an author. We don’t expect the librarian to be a font of knowledge or wisdom (even if many of them are).

That’s right, I think. When I am researching something on Google (something substantial; I’m not talking about train timetables or restaurants), I typically take care not to confine myself to the first couple of pages of results, but keep digging, going down rabbit holes, and so on. Moreover, the sites I visit often make no effort to conceal their biases, which allows me to adjust for them as I read.
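Leslie’s “it’s just math” point can be made concrete. At its core, the original PageRank algorithm is a simple iterative calculation over the web’s link graph: a page’s score reflects how likely a “random surfer” clicking links at random is to land on it, with no editorial judgment anywhere in the loop. Below is a minimal sketch of that idea in Python. It is illustrative only; the toy link graph is made up, and Google’s production ranking is, of course, vastly more elaborate than this.

```python
# Minimal sketch of the original PageRank idea: a page's score is the
# probability that a "random surfer" lands on it. Illustrative only;
# the toy graph below is invented, and real search ranking is far
# more complex than this.

DAMPING = 0.85  # probability the surfer follows a link rather than jumping at random


def pagerank(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Assumes every link target also appears as a key in the dict."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start from a uniform score

    for _ in range(iterations):
        # Everyone gets the "random jump" share up front.
        new_rank = {page: (1 - DAMPING) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # A dead-end page spreads its score evenly everywhere.
                for p in pages:
                    new_rank[p] += DAMPING * rank[page] / n
            else:
                # Otherwise it splits its score among the pages it links to.
                share = DAMPING * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank


# Toy example: three pages linking to one another.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(graph))  # "c" scores highest: it collects links from both others
```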

But Gemini is something different from a simple search engine.

Leslie:

It doesn’t just link to external information sources. It gives us information and opinions and pictures directly (even though in reality the app is regurgitating the internet; it’s a librarian disguised as a guru). The credit and the blame for Gemini’s output therefore goes to its creator — to the company behind it.

Leslie includes a chart showing the political leanings of a number of large language models. All of them land in the economically left, socially libertarian quadrant, although they vary in where they sit within it.

The chart follows a discussion of “alignment.”

Leslie:

So far, tech companies have defined “the alignment problem” as the problem of aligning the values of AIs with the values of humanity. But as Joseph Henrich or Jonathan Haidt will tell you, there [are] vanishingly few values we all hold in common. What we have are different clusters of values, with great variation between nations and (at least in democratic countries) within them. Perhaps AI companies should accept this and be transparent about their own cultural politics.

That’s not a bad idea, and it comes with the advantage of not presenting one particular set of values as if it were universal, when reality is (as Henrich or Haidt would explain) much more complex.

Leslie:

I can see a future in which different AI brands cater to different worldviews, as media brands do now. I can also see a future in which a single AI offers different cultural-political options. As David Rozado suggests, instead of a one-size-fits-all set of values, an AI might give the user a clear way to decide what kind of answer they want; how much empirical accuracy versus how much normative value.

As a general rule, I’ll take empirical accuracy over normative value, even more so when those norms are (even if ultimately derived from published sources) based on an opaque technological process. I can also see value in having AI brands of the left, right, and center, although those claiming to be of the “center” (a not infrequently deceptive notion) would have to be treated with care.

That said, if it were answers to questions I wanted, I doubt (train timetables and the like apart) that I would use any of them except, perhaps, as the starting point for a piece of research. But if I did (and if it were an area where this could be relevant), I would try to consult at least one each of left, right, and “center.”
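To make Rozado’s suggestion concrete, here is a purely hypothetical sketch of what such a user-facing dial might look like. Nothing below corresponds to any real vendor’s API; the function name, the `empiricism` parameter, and the prompt wording are all invented for illustration.

```python
# Purely hypothetical sketch of Rozado's "dial" idea: let the user choose
# how much empirical accuracy versus normative framing they want in an
# answer. No real AI vendor exposes this interface; everything here is
# invented for illustration.


def build_system_prompt(question: str, empiricism: float) -> str:
    """empiricism ranges from 0.0 (maximally value-laden framing)
    to 1.0 (stick to verifiable facts and flag uncertainty)."""
    if not 0.0 <= empiricism <= 1.0:
        raise ValueError("empiricism must be between 0.0 and 1.0")

    if empiricism >= 0.8:
        style = ("Answer with verifiable facts only. Cite sources, "
                 "state uncertainty, and avoid normative judgments.")
    elif empiricism >= 0.4:
        style = ("Lead with the facts, then note the main normative "
                 "perspectives, clearly labelled as such.")
    else:
        style = ("You may frame the answer around contested values, "
                 "but disclose which worldview the framing reflects.")

    return f"{style}\n\nUser question: {question}"


# A user who shares this column's preference would keep the dial high:
print(build_system_prompt("Was the Gemini launch a failure?", empiricism=0.9))
```

The point of the sketch is simply that “one-size-fits-all values” is a design choice, not a technical necessity: the same underlying model could serve different settings of the dial.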
