
Google Crusades for ‘Fairness’

Google appears to prefer proactively shaping reality to faithfully reflecting it.

James O’Keefe’s Project Veritas operates on the ethical periphery, and to some degree, it has to. In its efforts to infiltrate and expose the world of clandestine bias buried in left-wing institutions, it has played with proverbial fire, sometimes emerging burned (most notably when it tried and failed to sell a Washington Post reporter on a fabricated sexual-assault allegation) and sometimes serving as kindling in the immolation of institutions begging to be burned. It’s a polarizing organization, in that way, and its videos and leaks ought to be evaluated on their own merits.

Project Veritas recently released clips of a conversation it recorded between a Veritas operative and Google’s Head of Responsible Innovation, Jen Gennai. The content of that conversation, combined with internal memos released and explicated by an anonymous whistleblower at the company, was meant to demonstrate a pervasive and insidious left-wing bias at Google. How insidious that bias is remains an open question, but if O’Keefe’s footage and documents are to be believed, there are certainly people at the company promoting intersectional and other critical theories designed to influence algorithms and search outcomes.

Take, for instance, the so-called “Machine Learning Fairness” algorithms used by Google, designed to avoid producing results facilitated by what an internal memo describes as the “unjust or prejudicial treatment of people that is related to sensitive characteristics, such as race, income, sexual orientation or gender through algorithmic systems or algorithmically aided decision-making.” Google calls this phenomenon “algorithmic unfairness,” which sounds benign enough; later in the document, however, Google expounds upon precisely what this means in practice.

When a search result “is factually accurate” — or, in other words, when the company’s search algorithm delivers an accurate and precise representation of the world as it is — Google insists that this can “still be algorithmic unfairness.”

The memo lays out an example of this phenomenon: “Imagine that a Google image query for CEOs shows predominantly men. Even if it were a factually accurate representation of the world, it would still be algorithmic unfairness.”

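To make the memo’s distinction concrete, here is a toy sketch of the difference between a ranker that simply mirrors its underlying data and one that re-ranks results toward demographic parity. Everything in it is a hypothetical illustration invented for this example: the data, the field names, and the interleaving rule are assumptions, not Google’s actual system or code.

```python
# Hypothetical illustration only: a toy re-ranker that trades pure relevance
# for balance on a sensitive attribute. This is not Google's system; the data
# and the interleaving rule are assumptions made up for this sketch.

from itertools import zip_longest

# Toy image-search results for the query "CEO": (doc_id, relevance, gender)
results = [
    ("img1", 0.98, "M"), ("img2", 0.95, "M"), ("img3", 0.93, "M"),
    ("img4", 0.91, "F"), ("img5", 0.90, "M"), ("img6", 0.88, "F"),
]

def rank_by_relevance(items):
    """Reflect the corpus as-is: order strictly by relevance score."""
    return sorted(items, key=lambda r: r[1], reverse=True)

def rank_with_parity(items):
    """Interleave the two groups so the top of the list alternates between
    them, even when one group's raw relevance scores dominate."""
    men = rank_by_relevance([r for r in items if r[2] == "M"])
    women = rank_by_relevance([r for r in items if r[2] == "F"])
    interleaved = []
    for m, w in zip_longest(men, women):
        if m:
            interleaved.append(m)
        if w:
            interleaved.append(w)
    return interleaved

print([r[0] for r in rank_by_relevance(results)])  # mirrors the underlying data
print([r[0] for r in rank_with_parity(results)])   # reshaped toward parity
```

Under the memo’s definition, the first ordering would count as “algorithmic unfairness” even though it accurately reflects the data; the second is the kind of deliberately reshaped result the document appears to favor.
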
If this memo is genuine — and it certainly comports with the spirit of Google’s publicly available summary of its Machine Learning Fairness philosophy — then the company is indeed more interested in proactively shaping reality than faithfully reflecting it. This wishful reconstruction of reality is the charge of activists, writers, and the creative class, not the world’s largest search engine and information company. A search engine that dabbles in sanitizing basic realities that are inherently political — in a way distinct from filtering out violent or pornographic material — abdicates an essential part of what a search engine is for. If it isn’t charged with faithfully reflecting reality as it is, a search engine becomes little more than a canvas for the biases of its programmers.

And if the attitudes of some Google executives are representative of those biases, that canvas might well be imbued with political assumptions.

The clips of Project Veritas’s surreptitiously recorded conversation with Google executive Jen Gennai are presented in a vacuum, and, for her part, Gennai insists her remarks were “taken out of context.” That’s possible. But her remarks do, to some extent, stand on their own as indicators of her political views and the effect those views have on her approach to fostering “Responsible Innovation.”

Discussing what “fairness” means to her, Gennai insists that her “definition of fairness and bias specifically talks about historically marginalized communities; and that’s who I care about. Communities who are in power and have traditionally been in power are not who I’m solving fairness for.” This comports quite well with what appear to be the philosophical bases of the “Machine Learning Fairness” algorithm. If these documents and conversations are to be believed, Google is intentional in its desire to avoid further perpetuating the influence of those “communities” that “have traditionally been in power.”

It would seem rather important, then, to be in the good graces of Jen Gennai as she chooses what communities to “[solve] fairness for.”

Later in the conversation, Gennai described the pressure Google feels to “fill the gap of what should be done” if “the White House and Congress won’t play a role in making things more fair.” The context of these remarks is not provided, and it’s easy to read too much into what she’s saying here.

That said, to whatever extent it is the rightful prerogative of “the White House and Congress” to make “things more fair” — a more loaded phrase I cannot conceive — Google presumes to take on that awesome responsibility itself, acting as an extra-governmental facilitator of “fairness.”

Whatever the flaws of James O’Keefe and Project Veritas — and there are many — most of the statements Gennai makes in that video are revealing in themselves. If Google thinks itself in the business of discarding “factually accurate representations” of reality in its search results and insisting, to quote an internal document, upon contemplating “how we might help society reach a more fair and equitable state,” it has become something other than a dispassionate search engine.

But maybe that’s the point.
