The Corner

The Supreme Court Joins the Section 230 Fight — Halfway

A visiting school group walks along the plaza at the Supreme Court on Capitol Hill in Washington, D.C., February 22, 2022. (Tom Brenner/Reuters)

Lower courts split on a clear and consequential issue of free speech and freedom of association.


The past two decades have witnessed enormous growth in social-media and other online platforms, including platforms for discussion (Twitter and Facebook), search (Google), video (YouTube and TikTok), web hosting (Amazon), and fundraising and payment (GoFundMe, PayPal, and Venmo). Political discussion, including journalism, debate, activism, fundraising, and advertising, has accordingly consolidated onto a handful of those platforms. Section 230(c) of the Communications Decency Act, enacted in 1996 at the dawn of the Internet age, has become a lightning rod for a larger public debate over free speech on the Internet in general and on social-media platforms in particular. That debate centers on two questions: (1) Should the law hold social-media companies responsible for harmful, abusive, dishonest, or fraudulent content that they do not censor from their platforms? And (2) should the law hold social-media companies responsible when they do censor people and content from their platforms, but do so in ways that are arbitrary, unfair, or politically biased?

As a general legal rule, the First Amendment does not bar censorship by private actors on their own property. As a matter of cultural values, however, if private actors can throttle citizens' ability to participate in public debate or to petition their government for redress, that is dangerous to the foundations on which both free speech and democracy depend. The social-media free-speech debate therefore challenges our traditional mental models for free speech in a number of ways:

  1. Are the social-media companies engaging in their own speech, or their own freedom of association, when they decide what to exclude from their platforms, or are they analogous to common carriers who must accept all customers?
  2. Are the social-media companies truly engaged only in private action, or are they informally doing the bidding of the U.S. government, or in some cases formally doing the bidding of the Chinese government?
  3. Does Section 230’s grant of immunity from certain types of liability for censoring speech mean that the beneficiaries of that immunity are de facto acting with public backing in censoring content?
  4. Is the stranglehold of a few social-media companies on the market an intractable feature, or is it prone to the same sort of continuous disruption by competition, innovation, and market forces that has characterized Internet speech, commerce, and computer technology for the past half century?

As a refresher, Section 230(c) (which Congress captioned as “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” and which it designed mainly to let Internet service providers block pornography) does two things:

First, Section 230(c)(1), captioned “Treatment of publisher or speaker,” states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This is a pro-speech liability protection. It was inspired largely by Stratton Oakmont, Inc. v. Prodigy Services Co., a 1995 New York state court decision that allowed a brokerage firm (Stratton Oakmont) and its president (Danny Porush) to sue Prodigy’s Internet bulletin-board service for defamation over user-posted messages accusing them of fraud. The case seems especially abusive in retrospect, not only because Prodigy had nothing to do with writing the messages, but also because Stratton Oakmont was ultimately shut down by regulators, Porush pleaded guilty to fraud, and the firm ended up inspiring the movie The Wolf of Wall Street. One could hardly ask for a better illustration of why defamation suits should not be allowed to stifle public discussion.

While some conservatives would like to see defamation laws expanded and shields against defamation reduced, Section 230(c)(1) is, at least at the conceptual level, more broadly unpopular with the Left than with the Right: Progressives are likelier than conservatives to see speech as a potentially harmful thing that should be more closely supervised to protect against misinformation and harassment.

Second, Section 230(c)(2), captioned “Civil liability,” states: “No provider or user of an interactive computer service shall be held liable on account of (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).” While Section 230(c)(1) protects Internet companies from being held responsible for other people’s speech on their platforms, Section 230(c)(2) protects the companies’ own power to censor speech on their platforms.

Section 230(c)(2) is the most controversial part of the law for conservatives angry at the political tilt of the censors.

As I have written on previous occasions, while the problems identified by critics of the social-media giants are real ones, I remain skeptical of legal doctrines that would erode the settled distinction between government censorship and private freedom of association. In any event, it has been clear for some time now that the Supreme Court would have to weigh in on a number of debates around Section 230, the First Amendment, and state laws that strike a different balance than Section 230 does — including the Florida and Texas laws that effectively treat social-media companies as common carriers and restrict them from biased censorship on their platforms.

That day has now arrived — for Section 230(c)(1) and related legal doctrines. This morning, the Supreme Court granted certiorari to hear two cases on aspects of the same question. The first, Gonzalez v. Google LLC, is a lawsuit against Google by a man whose daughter was murdered by ISIS. He cannot, owing to Section 230(c)(1), sue Google for hosting ISIS recruitment videos on YouTube, but his legal theory is that YouTube’s algorithmic recommendations of those videos are Google’s own speech, which Section 230(c)(1) does not shield. The lower courts have divided on that question, so the Supreme Court will now address it. A loss for Google would have significant implications for the entire business model of search engines and of social-media companies (especially TikTok) that depend on algorithmic recommendations to serve up both user content and advertising.

The second case, Twitter, Inc. v. Taamneh, was decided by the Ninth Circuit together with Gonzalez v. Google LLC, but it did not even address Section 230. It is also a lawsuit arising from a terrorist attack (the ISIS attack on the Reina nightclub in Istanbul), brought by relatives of a victim, who argued under the Anti-Terrorism Act, 18 U.S.C. § 2333, that the social-media platforms should have done more to stop ISIS recruitment by censoring content. The Ninth Circuit agreed that the plaintiffs had stated a claim:

Plaintiffs allege that, at the time of the Reina Attack, defendants were generally aware that ISIS used defendants’ platforms to recruit, raise funds, and spread propaganda in support of their terrorist activities. The [complaint] alleges that, despite “extensive media coverage” and legal and governmental pressure, defendants “continued to provide these resources and services to ISIS and its affiliates, refusing to actively identify ISIS’s Twitter, Facebook, and YouTube accounts, and only reviewing accounts reported by other social media users.” These allegations suggest the defendants, after years of media coverage and legal and government pressure concerning ISIS’s use of their platforms, were generally aware they were playing an important role in ISIS’s terrorism enterprise by providing access to their platforms and not taking aggressive measures to restrict ISIS-affiliated content . . . .

Plaintiffs adequately allege that defendants knowingly assisted ISIS. Specifically, the [complaint] alleges that ISIS depends on Twitter, Facebook, and YouTube to recruit individuals to join ISIS, to promote its terrorist agenda, to solicit donations, to threaten and intimidate civilian populations, and to inspire violence and other terrorist activities. . . . Plaintiffs’ complaint alleges that each defendant has been aware of ISIS’s use of their respective social media platforms for many years—through media reports, statements from U.S. government officials, and threatened lawsuits—but have refused to take meaningful steps to prevent that use. The [complaint] further alleges that Google shared revenue with ISIS by reviewing and approving ISIS’s YouTube videos for monetization through the AdSense program. Taken as true, these allegations sufficiently allege that defendants’ assistance to ISIS was knowing. . . . [T]he social media platforms were essential to ISIS’s growth and expansion. . . .

Plaintiffs allege that, without the social media platforms, ISIS would have no means of radicalizing recruits beyond ISIS’s territorial borders. Before the era of social media, ISIS’s predecessors were limited to releasing short, low-quality videos on websites that could handle only limited traffic. According to the [complaint], ISIS recognized the power of defendants’ platforms, which were offered free of charge, and exploited them. ISIS formed its own media divisions and production companies aimed at producing highly stylized, professional-quality propaganda. The [complaint] further alleges that defendants’ social media platforms were instrumental in allowing ISIS to instill fear and terror in civilian populations. By using defendants’ platforms, . . . Plaintiffs allege that ISIS has expanded its reach and raised its profile beyond that of other terrorist groups. These are plausible allegations that the assistance provided by defendants’ social media platforms was integral to ISIS’s expansion, and to its success as a terrorist organization. [Emphasis added.]

The two cases, taken together, will give the Court an opportunity to clarify the ground rules for when the platforms can be sued for doing too little to censor content, or too much to promote it.

The other shoe is likely to drop soon on the Section 230(c)(2) front, specifically the power of states to regulate censorship of users by private social-media companies on their own platforms. In May, the Court sided temporarily with the challengers to the Texas social-media law, vacating a Fifth Circuit stay and sending the case back down; at the time, Justice Samuel Alito wrote (joined by Justices Clarence Thomas and Neil Gorsuch): “This application concerns issues of great importance that will plainly merit this Court’s review.” Since then, the Fifth Circuit has ruled again, with a divided panel upholding the Texas law, while the Eleventh Circuit struck down most of the Florida law. The two decisions are irreconcilable, and on September 21, Florida filed a petition for certiorari asking the Court to take the case. A response is due later this month. It is difficult to picture the Court ducking the case at this juncture, with a live circuit split on a clear and consequential issue of free speech and freedom of association.
