Yesterday, my colleague and podcast co-host Alexandra DeSanctis wrote a piece describing a confrontation between Pennsylvania state representative Brian Sims and an elderly woman who was apparently praying quietly outside a Philadelphia Planned Parenthood office. Sims’s actions toward the woman were absurdly aggressive. He mocked her faith, her age, and her race. He impeded her path, and then tried to get her address so he could go “protest” in front of her home.
Later yesterday, another video emerged, this one on Sims’s Facebook page, in which he mocks a small group of young protesters, tries to dox them, and attacks their race and religion. Most of the response to Sims yesterday focused on his ridiculous substance and demeanor. He’s a public official trying to bully and intimidate people who are quietly and peacefully exercising their First Amendment rights.
But I have a different question: Where are the social-media police?
Just last week, Facebook banned a series of extremist accounts for being “dangerous” after evaluating their content and their owners’ “activities outside of Facebook.” Twitter has launched its own round of bans against far-right figures, including — for example — banning Laura Loomer after she tweeted that Ilhan Omar was “anti-Jewish” and part of a faith in which “homosexuals are oppressed” and “women are abused.” Just today, Twitter suspended a clearly marked Alexandria Ocasio-Cortez parody account, in part for attempting to “manipulate the conversations on Twitter,” whatever that means.
Shouldn’t the exact same rules that empowered bans of far-right figures apply to far-left Brian Sims? Let’s look at Facebook’s community standards. They prohibit “hate speech” and define it as “a direct attack on people based on [their] protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability.” They also prohibit “soliciting” certain kinds of “personally identifiable information,” including addresses.
On Facebook, Sims tried to dox young girls and explicitly attacked their race and religion. Sims’s post is still up, and Sims’s account is still active.
Twitter’s “hateful conduct policy” prohibits “directly attack[ing]” someone “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” It also explicitly notes that women are disproportionately targeted for online abuse.
On Twitter, Sims tried to dox an elderly lady and explicitly attacked her race and religion. He also chronicled behavior that was clearly physically intimidating. Sims’s tweet is still up, and Sims’s account is still active.
Moreover, if we want to apply the terminology of power dynamics, in both cases Sims is “punching down.” A male public official is engaging in abusive behavior against women who lack his platform and stature. By any fair reading of either social-media platform’s rules, Sims is just as guilty of violations as any number of far-right accounts. Yet he skates by with apparent impunity. Why?
The answer goes to the heart of why Silicon Valley has lost the trust of tens of millions of Americans. They know the rules are malleable. They know double standards apply. And they know that the campus-censorship culture is being imported online.
On campus (and increasingly also on social media), far-right anger and attacks are deemed dangerous and abusive. Far-left anger and attacks are instead deemed expressions of righteous outrage. Maliciously racialized far-right language is seen as evidence of white supremacy and white nationalism. Maliciously racialized far-left language is perceived as an attack on the privileged and powerful.
The result is that hate-speech policies exist not as easily interpreted, uniformly applied rules that provide all users fair notice of the conditions for using the platforms, but rather as subjectively interpreted, selectively applied weapons to wield on behalf of favored ideas and individuals. The result is that some people are more exposed to abuse than others because those people are deemed less worthy of protection.
Twitter will move to protect a U.S. congresswoman like Ilhan Omar — a highly visible public figure with a huge platform — from attacks on her faith, but it will not lift a finger to protect an unknown elderly woman from becoming an object of hate and derision on the basis of her age, race, or faith. How does this make any rational sense?
As a matter of principle (private companies enjoy the blessings of liberty) and pragmatism (social media is unlikely to improve under the watchful eye of, say, President Kamala Harris), I oppose government efforts to regulate social-media speech policies. But publicly exposing inconsistency and hypocrisy lays the groundwork for a market correction. The most powerful check on social media remains the user base; the companies’ economic models depend not just on user loyalty but also on user growth.
I have long argued that social-media companies should voluntarily adopt First Amendment–based speech policies. A First Amendment analysis does not mean “anything goes,” but it does mean that rules and regulations restricting speech must be viewpoint-neutral. Harassment, incitement, invasion of privacy, and intentional infliction of emotional distress are speech limitations with viewpoint-neutral definitions, and one of the fastest ways to violate the First Amendment is with selective enforcement even of viewpoint-neutral rules.
The great value of viewpoint neutrality is that it comports with our sense of fundamental fairness. It hearkens back to the image of the blindfolded Lady Justice, holding her scales, indifferent to the power or privilege of her petitioners. Twitter and Facebook have removed the blindfold, thrown away the scales, and chosen to wield only the sword. It’s the weapon of social justice, and when it’s wielded against a lone, brave woman on a Philadelphia sidewalk, it’s an instrument of bias, abuse, and hate.