Why Shouldn’t Facebook Ban Infowars?

Alex Jones speaks at a rally near the Republican National Convention in Cleveland, Ohio, in 2016. (Lucas Jackson/Reuters)
The case for the move might seem self-evident, but it should give us some pause.

Why not just ban them? Facebook executive John Hegeman struggled to answer that question when CNN reporter Oliver Darcy asked it at a Thursday news conference. In this case, “them” was Infowars, the conspiracy-mongering media enterprise of radio host Alex Jones, and Hegeman couldn’t give Darcy a good reason to keep its Facebook page around. After the news conference was over, Facebook took a second stab at responding via Twitter: “Pages on both the left and the right pump out what they consider opinion or analysis—but others call fake news,” the banning of which would be “contrary to basic principles of free speech.”

Soon enough, the social network was the object of derision among tech-savvy media and the broader public. New York Times tech correspondent Kevin Roose compared Facebook’s reply to Trump’s Charlottesville equivocation, while blogger Sophie Weiner wrote in Splinter that the true culprit was, in fact, capitalism. In the minds of many, the case for banning Infowars was self-evident, a belief that Facebook’s characteristically bumbling response served only to confirm.

Yet it is far from self-evident. One consideration that militates against it is the possibility of martyrdom, which always looms when a speaker who has convinced his audience that he is besieged by the liberal cultural hegemony is ostracized by the powers that be. In the broadest sense, Facebook can’t make Alex Jones go away; banning him might add to his support and further radicalize his fans. When online platforms kick off marginal-but-popular users, they risk splintering our online ecosystem even further, encouraging disillusioned users to flock to alternative websites where gleefully prejudiced or otherwise marginal politics are the norm. This is a scenario we should want to avoid.

More generally, there is the question of whether Facebook should be considering the substance of its users’ politics at all. Infowars is an obvious candidate for banishment, but doubts about the ability of tech-company employees to exercise editorial judgment, however well-trod and unconvincing to some on the left, are not obviously misplaced, especially given the struggles Facebook, Twitter, and Google have each had with their respective curated news sections.

Hegeman’s failure to give a plausible answer to Darcy’s question does not mean that none were available. Some of Facebook’s critics are intelligent, thoughtful journalists who have been covering this beat for years. Surely they understand that social networks that choose to censor their users court unintended consequences and often make mistakes. Other plausible options exist, such as quarantining marginal speech rather than suppressing it entirely. So what motivated the critics of Facebook’s loose approach to problematic speech?

Maybe the most obvious explanation is the best: They simply want Facebook to listen to them. Self-appointed Internet watchdogs have petitioned site moderators to ban out-group users for as long as the modern web has existed. Seen in this light, the criticism of Facebook by tech journalists is no different from message-board users mass-reporting undesirables, imploring the mods to kick them off.

What was once a folkway on the diffuse network of Internet forums has trickled up to our largest online platforms. YouTube has banned or demonetized the channels of edgy sketch-comedy troupes, gun enthusiasts, and gamblers under the site’s community guidelines. Yet the policy has drawn criticism from all sides: It’s either too arbitrarily enforced against apolitical users, too severely enforced against countercultural ironists, or too weakly enforced against reactionaries. (In a dark twist on this theme, the deranged woman who attacked YouTube’s headquarters months ago appeared to be motivated by the demonetization of her bizarre channel.) For its part, music-streaming company Spotify recently announced a policy to ban both hateful music and music by hateful artists, only to walk it back when rappers and customers threatened to boycott the service. And when Valve employee Erik Johnson recently announced that video-game platform Steam would enact a hands-off policy under which video games would be banned only if their contents were explicitly illegal, he incurred the wrath of progressive video-game journalists who called him “irresponsible” and lambasted the libertarian approach.

Beyond the obvious point that content policies are tricky to write, these cases testify to how easy it is for content-policy skirmishes to devolve into culture wars that break down along partisan lines. A company will be shamed by the left for fomenting hate if its policy is too lenient, accused by the right of censorship if it’s too restrictive.

There is plenty of content online that ought to shock any well-adjusted conscience, and there should be no defending the substance of much of it. But just as Alex Jones has a Facebook page, so too do white nationalists have subreddits, anti-religious bigots have YouTube channels, and Maoists have a Twitter scene. Perverse as it may seem, there is some value in tech companies leaving the marginal or indefensible alone. It’s wrong to assert that the answer to these questions is obvious, and it’s hard to shake the feeling that the fight over what tech companies should allow on their platforms is actually a more conventional struggle for political power.
