A Huge Loss for Big Tech Regulation


Ron DeSantis’s anti–Big Tech law lost in court, and the sweeping First Amendment ruling may doom future efforts to regulate social media.



On Monday, a unanimous three-judge panel of the Atlanta-based U.S. Court of Appeals for the Eleventh Circuit struck down the bulk of Florida’s social-media law. This is much more than a temporary setback for Ron DeSantis in trying to rein in “Big Tech.” The decision imposes a sweeping First Amendment barrier to governmental regulation of social-media content moderation. Given the decision’s national implications, the case may make its way to the Supreme Court, and the Court ought to hear it.

Florida’s Big Tech Law

Florida passed S.B. 7072 in May 2021. Its main aim was to prevent Twitter, Facebook, and other social-media platforms from banning political candidates, throttling stories published by news organizations, or engaging in politically biased moderation on the basis of ever-shifting and inconsistent standards — grievances derived from those companies’ banning Donald Trump and squelching the New York Post’s legitimate election-season reporting on Hunter Biden. The law also required platforms to explain social-media bans, imposed other disclosure requirements, and limited how frequently platforms could change their rules.

The only parts of the bill to survive the Eleventh Circuit’s decision were some of the less onerous disclosure requirements (such as clarity about a platform’s rule changes), which the court found to be reasonable regulations of commercial speech aimed at informing consumers, and a provision allowing users 60 days to retrieve their data after being banned. Notably, the court struck down the requirement that platforms explain why they banned accounts, removed content, or engaged in “shadow bans” or manipulation of the visibility of some content or users.

The theory behind the bill is, first, that a handful of companies have disproportionate control over the access of most Americans to speak on social media or consume news or the views of news-makers there; second, that this control is exercised in an unfair and often arbitrary or biased way by the companies, whose content-moderation staffs all tend to share a common set of left-leaning politics; and third, that it is appropriate to subject social-media platforms to regulation that would be improper as applied to news organizations because platforms are in a fundamentally different business from publishers.

Each of these is a reasonable point as a general critique of Big Tech. Moreover, as I have noted before, Section 230 of the Communications Decency Act does not incorporate the common-sense publisher-platform distinction, and despite some creative legal efforts, the First Amendment does not restrict the platforms from censoring speech, so the legal status quo does nothing to address any of the critiques.

But just because there is a problem does not mean that it is possible to find a solution that is constitutional or avoids creating problems of its own. At the time the law passed, Charlie Cooke laid out the legal and philosophical case against it: “To force private entities to host or disseminate speech that they abhor is, ultimately, to force them to violate their conscience. It’s illegal in America — and it should be.” By contrast, if Elon Musk’s bid to buy Twitter goes through to completion, it will strike a dramatic blow for free-market forces correcting some of the problem.

The Eleventh Circuit opinion in NetChoice LLC v. Attorney General, State of Florida was written by Judge Kevin Newsom (a Trump appointee and former Alabama solicitor general) and joined by senior judges Ed Carnes (a George H. W. Bush appointee) and Gerald Tjoflat (a Ford appointee and the dean of the Eleventh Circuit). The case was argued only in late April, after DeSantis signed amendments to the social-media law that stripped the (entirely improper) exemption from the law for companies (meaning Disney) that operated theme parks in Florida. The law had already been stayed by a federal district judge in the interim.

Speech and the Right to Exclude

The opinion is a thoroughgoing rout for Big Tech regulation. It is also firmly grounded in the existing Supreme Court precedents on compelled speech, freedom of association, and the right to exclude. In Miami Herald Publishing Co. v. Tornillo (1974), the Court struck down a Florida law with a similar justification: In order to counteract media bias, Florida required newspapers to give politicians they criticized equal space to respond in their pages. That violated the newspapers’ First Amendment right to decide what to print. The same principle was applied in Turner Broadcasting System, Inc. v. FCC (1994) to leave cable companies free to decide what channels to carry. As the Eleventh Circuit explained the rule:

There’s no legitimate . . . governmental interest in leveling the expressive playing field. Nor is there a substantial governmental interest in enabling users . . . to say whatever they want on privately owned platforms that would prefer to remove their posts. . . . Private actors have a First Amendment right to be “unfair”—which is to say, a right to have and express their own points of view.

In Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston (1995), the Court extended that principle to allow the organizers of Boston’s St. Patrick’s Day Parade to exclude gay groups. It didn’t matter, the Court said in Hurley, that the parade allowed lots of groups to march and express their own views and lacked a theme more specific than general Irish-American pride and vague Catholicism; because they are not the government, the parade organizers could still decide which groups and messages to exclude, and that decision sends a message of its own.

Social conservatives benefited from the ruling in Hurley and should consider its principle a valuable one even when it produces results we don’t like. The Eleventh Circuit concluded that social-media companies engage in speech by their decisions about what content to exclude or promote on their platforms, even though the content is created by users:

The platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users’ posts into collections of content that they then disseminate to others. . . . When a platform removes or deprioritizes a user or post, it makes a judgment about whether and to what extent it will publish information to its users—a judgment rooted in the platform’s own views about the sorts of content and viewpoints that are valuable and appropriate for dissemination on its site.

The court buttressed this by treating the bland corporate-speak discussions of platform policies as “editorial judgment” and emphasized that different platforms choose different standards in order to appeal to different communities of users, contrasting Twitter’s robust political presence to the ban on political content on Roblox, a gaming network for kids. Moreover, political bias in content moderation is evidence that the social-media companies aren’t neutral technologies like the telegraph, which takes no interest in the content transmitted on its wires: “That observers perceive bias in platforms’ content-moderation decisions is compelling evidence that those decisions are indeed expressive.”

Big Tech critics may, at this point, note that if the companies are acknowledged to be putting a thumb on the scale to promote their own message, it may be worth it for Congress to reconsider Section 230(c)(1), which immunizes social-media platforms from liability for user-posted content, precisely on the theory that the comments section isn’t the website’s own speech. Certainly, the First Amendment doesn’t require Congress to preserve Section 230. Arguably, Congress might be able to condition a benefit such as Section 230 on platforms adhering to content-moderation rules that the government cannot impose directly. But even aside from the wisdom of Section 230 repeal as a matter of policy — given that it would doubtless result in less social-media speech as a whole — the NetChoice opinion’s analysis makes clear that even an indirect government effort to dictate content-moderation rules would have to be analyzed as a government condition on the fundamental First Amendment right of free speech.

Florida offered two arguments for its position. One, drawing from PruneYard Shopping Center v. Robins (1980) and Rumsfeld v. Forum for Academic & Institutional Rights, Inc. (FAIR) (2006), was that the government sometimes has a right to require general access to private property. PruneYard required a mall owner to allow leafleting on his property, but the court noted that subsequent cases have not applied PruneYard when the property owner objected to the content of particular speech.

FAIR upheld the Solomon Amendment, which required universities — as a condition of federal funds — to allow military recruiters. But the Court in FAIR noted that allowing access to a recruiter isn’t speech in the way that a politician’s newspaper column or even a marcher in a parade is. The expressive interest of the law schools that sued in FAIR was in letting people know that they disapproved of the military. They could do that by protesting. Not so here, said the Eleventh Circuit: “Social-media platforms, unlike law-school recruiting services, are in the business of disseminating curated collections of speech.” And while the presence or absence of military recruiters might not alert students to how a law school feels about the military, a Facebook user wouldn’t think “that the reason he rarely or never sees pornography on Facebook is that none of Facebook’s billions of users ever posts any. The more reasonable inference . . . is that the platform disapproves.” And in any event, at least for companies in the content-publishing business, “consumer confusion simply isn’t a prerequisite to First Amendment protection.”

Florida’s final argument was one suggested by Justice Clarence Thomas last year in a concurring opinion in Biden v. Knight First Amend. Inst. (2021), in which he raised the possibility that states could regulate social-media platforms as “common carriers” like railroads or public accommodations such as inns. There are many precedents for such regulation of private-property rights. Justice Thomas, however, did not fully flesh out the question of when the First Amendment would permit the state to regulate a provider of speech as a common carrier, other than noting that telegraph companies, in their time, had been regulated in ways (including Section 230–style immunity from defamation suits) that reflected their willingness to take all traffic. Moreover, he acknowledged that “this Court has been inconsistent about whether telegraphs were common carriers.”

The Eleventh Circuit noted three problems. First, whatever the government’s power to regulate businesses that have traditionally acted as common carriers or public accommodations, social-media platforms have never behaved that way. Second, it was dubious whether states could pass laws converting them against their will into common carriers, given that the Supreme Court had rejected this for cable companies in Turner and allowed broadcast networks to be regulated in this fashion only because of the physical scarcity of the broadcast spectrum — not an issue on the Internet. On this point, with an eye toward who would make up a Supreme Court majority, Judge Newsom’s opinion pointedly cited opinions by Brett Kavanaugh when he was a D.C. Circuit judge that were skeptical of such treatment for social-media companies.

Third, even if one were to develop an originalist argument for common-carrier regulation, “at common law, even traditional common carriers like innkeepers were allowed to exclude drunks, criminals, diseased persons, and others who were obnoxious to others, and telegraph companies weren’t required to accept obscene, blasphemous, profane or indecent messages” (quotations and citations omitted). Even if one thinks that current First Amendment jurisprudence has unduly limited the government’s power to restrict forms of speech that were traditionally seen as indecent, a Founding-era conception of public decency would cut against forcing private speech-publishing businesses to take all comers.

All in all, the NetChoice opinion bulldozes every argument in favor of government power to dictate content-moderation terms to social-media companies. It does so with firm grounding in the Supreme Court’s precedents, to the point where Florida will have an uphill battle defending its law without asking the Supreme Court to throw out a well-developed body of First Amendment case law. Even more narrowly tailored laws such as the one passed in Texas would be hard to defend under this decision. Big Tech critics may try other legal avenues such as antitrust law or market solutions such as hostile takeovers, shareholder activism, or the creation of competing platforms. But unless they can persuade the Supreme Court to change how it reads the First Amendment, the legal fight to regulate content moderation is probably finished.
