Law & the Courts

The Social Media Censorship Dumpster Fire

Censors can behave in unpredictable, arbitrary, and capricious ways — and no one has a sufficient monopoly on truth to serve as philosopher king over speech and debate.

I’m getting an eerie sense of déjà vu. I’m old enough to remember the first wave of modern social-justice censorship, when colleges and universities assembled the best and brightest minds to determine exactly the right way to cleanse their campuses of “hate speech” while still preserving some semblance of a marketplace of ideas. The result was the campus speech code, which spread like a virus from coast to coast until a large majority of colleges had promulgated one or more policies that suppressed constitutionally protected speech.

And, wow, were some of those policies bizarre and incoherent. Here are some of the greatest hits, taken from a few of my old cases:

Penn State University declared that “acts of intolerance will not be tolerated.”

The Georgia Institute of Technology prohibited “denigrating written/verbal communications (including the use of telephones, emails and computers) directed toward an individual because of their characteristics or beliefs.”

Temple University banned “generalized sexist remarks.”

And here’s my personal favorite, from Shippensburg University, a state school in Pennsylvania: “Shippensburg University’s commitment to racial tolerance, cultural diversity and social justice will require every member of this community to ensure that the principles of these ideals be mirrored in their attitudes and behaviors.” (Emphasis added.)

In each of these cases, my clients either won a court judgment or a legal settlement. And I was hardly the only lawyer who successfully sued public universities to challenge speech codes. In fact, never once has a court upheld a speech code on the merits, and often the cases arose under the most bizarre of fact patterns — where universities engaged in rank favoritism, frequently focusing their ire on Christian conservative dissenters.

One of the most outrageous cases involved a young Christian woman named Emily Brooker, a student at Missouri State University. She refused to participate in a classroom assignment that would have required her to engage in public advocacy in support of LGBT rights. She was immediately brought up on academic charges and then required to “close the gap” between her faith and the values of the university’s social-work program. She sued, she won, and the university disciplined the offending professors.

Why bring this up now? Because it’s worth reflecting on the utter failure of previous social-justice censorship regimes when evaluating the newest attempts to put a bunch of (mainly progressive) smart people in a room to socially engineer the marketplace of ideas. I’m speaking of course about the social-media speech wars.

This week, two important reports, one in Vanity Fair and the other in The Verge, document Facebook’s struggles to moderate “hate speech” while still preserving its open platform. The two stories complement each other perfectly. The Vanity Fair report is a top-down look at the company’s efforts to design policy, while The Verge reports from the trenches — taking a deep dive into the real-life experiences of Facebook’s content moderators. Neither piece is remotely reassuring.

Vanity Fair launches its story with an anecdote about Facebook’s systematic purging of the statement “men are scum” during the height of the #MeToo revelations. It turns out that Facebook was purging a primarily political statement even as it was leaving up all kinds of other content that its users found hateful or offensive. This led to a debate that perfectly illustrates the challenge of hate speech restrictions.

Should Facebook “punish attacks against gender less harshly than, say, attacks against race”? Well, that idea fell apart as soon as Facebook leaders pointed out that the policy would provide heightened protection for statements like “women are scum.” Then came the idea to “treat the genders differently.” In other words, protect women and not men.

That sounds like social justice, right? But there’s a complication. What about all the other genders? As one Facebook executive said, “We live in a world where we now acknowledge there are many genders, not just men and women. I suspect the attacks you see are disproportionately against those genders and women, but not men.” But is it commercially viable to protect 55 out of 56 genders? That starts to look like Facebook is targeting men.

But let’s say Facebook figures out what it believes is the right formula. That formula is not always self-executing. That’s where the real live human moderators come in, and that is censorship sausage you do not want to see made. As The Verge reports, the moderators, tasked with policing all of Facebook’s most disturbing posts, often end up traumatized by the images and — oddly enough — sometimes come to believe the fake news:

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

It’s incredibly reassuring to hear that flat-earthers and 9/11 truthers can also double as the speech police. Oh, and arguments about censorship decisions can get so intense that moderators have threatened supervisors with physical violence for overruling their decisions.

And of course, these moderators are applying policies that are often utterly incoherent and sometimes made up “on the fly.”

A post calling someone “my favorite n—–” is allowed to stay up, because under the policy it is considered “explicitly positive content.”

“Autistic people should be sterilized” seems offensive to one moderator, but it stays up as well. Autism is not a “protected characteristic” the way race and gender are, and so it doesn’t violate the policy. (“Men should be sterilized” would be taken down.)

There are inherent costs when any company (or campus) that is intent on creating a marketplace of ideas turns its back on the accumulated experience of centuries of American free-speech traditions and jurisprudence. There are reasons, for example, why there is no “hate speech” category in American constitutional law. There is no workable definition that does not sweep too broadly. There are reasons why viewpoint neutrality is the hallmark of First Amendment jurisprudence. Censors can behave in unpredictable, arbitrary, and capricious ways — and no one has a sufficient monopoly on truth to serve as philosopher king over speech and debate.

Moreover — and this is important — social-media users have even less ground to demand censorship than college students do. College students, after all, can’t block or mute bad speech. They have to engage it, in person, in the classroom, quad, and dorm. They have to learn how to answer bad speech with better speech.

Social-media users, by contrast, have the ability to carefully regulate their own environment. They can block bad voices. They can limit who sees their posts, and they can limit what they see themselves. A user can create a custom experience that cleans out all the trash. Would it perhaps be a better investment by Facebook to train users in curating their own feeds? Instead, it’s empowering an army of would-be censors — outraged users who not only demand that they not see offensive content but also that no one else see it either.

Such a demand may be appropriate for personally targeted harassment (especially of non-public figures), but it gets much less justifiable when attempting to censor ideas.

If Facebook (or Instagram) wants to be family-friendly, it can apply the kind of constitutionally valid decency rules that govern broadcast media. If a social-media platform wants to be more edgy (like Twitter), it can warn users that, for example, pornographic content is permitted. But in any case, the cardinal principle should be viewpoint neutrality. The primary response to offensive content should be blocking or muting, not banning and suspending.

Yes, this puts the “burden” on individuals to click a button. But this is far preferable to putting the burden on even well-meaning men and women to regulate speech coherently, fairly, and consistently. The long history of censorship shows this is not possible. Even the brightest minds fail. Just ask a generation of campus administrators — men and women who’ve been forced by law and reason to repeal speech codes or leave them dormant and unenforced.

Or, to use the language of Silicon Valley, we’ve beta-tested technocratic censorship before, and the software failed. And now Censorship 2.0 is suffering the same flaws in its code. It’s time to pull the product. Let’s return to First Amendment principles online — and let the chips fall where they may.
