Earlier this week I wrote a piece for the New York Times where I proposed a centuries-old framework for resolving our endless series of online free speech controversies. Social media companies — dedicated as they are to creating a marketplace of ideas — would do well to voluntarily adopt First Amendment speech doctrines to regulate their platforms. By that standard, Alex Jones can be purged not because his speech is “hateful” or “dehumanizing” (terms that are almost infinitely malleable and subject to ideological manipulation) but because it’s libelous. He maliciously makes false assertions of fact and injures innocent victims along the way. Just ask the Sandy Hook families how much damage Jones has done.
The great virtue of modern First Amendment jurisprudence is its near-absolute commitment to viewpoint neutrality. Courts don’t recognize exceptions for “hate speech” or “dehumanizing language” in part because they’re inherently vague and in part because — in practice — they’re always tied to disfavored viewpoints. As Facebook, Twitter, and YouTube demonstrate with depressing regularity, even progressive ideological monocultures have trouble applying their hate speech or “hateful conduct” policies consistently or coherently.
In other words, they have no better luck than their counterparts on campus, where the free speech wars have raged for a generation, with no end in sight.
But to embrace the viewpoint neutrality of the First Amendment is not to declare that “anything goes.” Embedded within First Amendment jurisprudence are exceptions for defamation, obscenity, child pornography, threats, and harassment. Not every communication is protected expression, but limitations on expression cannot be based on the viewpoint of the speech.
As I dug down into objections to my proposed First Amendment framework, I often found that the objections were ultimately based on a desire to discriminate on the basis of viewpoint, on a desire to use the power of the platform to privilege some voices and suppress others. “Free speech for me but not for thee” is a heckuva drug. Big tech is the place where all too many members of the social justice left show us exactly the kind of “marketplace of ideas” they hope to create. In the final analysis, they want to engage in viewpoint discrimination and are pleased that rules against hate speech are vague and subjective. That subjectivity grants them enormous power to regulate the marketplace — at least so long as they hold the high ground in the battle for corporate control.
There were other, non-viewpoint-related objections to my proposal. The most consequential (in my view) is the entirely fair critique that regulating behavior on the basis of common-law concepts like defamation merely substitutes one thorny decision (how do we adjudicate defamation?) for another (how do we define hate speech?). To be clear, there is no way to painlessly and easily implement any form of regulatory regime — especially on platforms as large and complex as Twitter and Facebook. There will be stumbles. There will be failures. But I submit that it will be easier and cleaner to implement rules based primarily on objective standards that have been developed, refined, and honed in countless court cases than to implement rules that have no fixed definition, no history of coherent adjudication, and no meaningful standards to guide enforcement.
Finally, yes, I know and understand that Facebook, Twitter, and the rest operate in nations that lack any First Amendment traditions. As much as I’d like for American social media companies to nudge other nations in the direction of free expression, their policies in, say, Germany will of course have to reflect German law. If we can use the First Amendment to bring some form of a truce to American Twitter, that’s good enough for me. America first. Peace abroad can wait.