The Corner

Supreme Court: Blocking People on Facebook Can Violate the First Amendment


This is a bad result, and it is disappointing that it attracted no dissent across the Court’s ideological lines.


The Supreme Court this morning addressed two cases on the question of when the First Amendment is violated by public officials deleting comments or blocking commenters on their Facebook pages and Twitter accounts. The First Amendment doesn’t apply to private decisions, only to government action, so the legal question is when a social-media account on a private platform becomes a “public forum” where everyone has the constitutional right to be heard. The Court, in an opinion written by Justice Amy Coney Barrett, unanimously concluded that the First Amendment can restrict the block button and protect the public’s right to comment in some cases. This is a bad outcome, and one that Congress ought to step in and fix.

The issue has been an open one since the Second Circuit, in 2019, held that Donald Trump violated the free-speech rights of people he blocked on Twitter, on the theory that Trump’s use of Twitter to discuss government business made his account a public forum operated by the government. That premise seemed dubious after Twitter blocked Trump himself from using the account, making clear that access to Trump’s account was ultimately controlled by a private company, not by the government. The Supreme Court, however, ducked the question at the time, concluding in a case now captioned Biden v. Knight First Amendment Institute that the issue was moot in Trump’s case because he no longer had a Twitter account (or, for that matter, a government job).

The two current cases, Lindke v. Freed and O’Connor-Ratcliff v. Garnier, involve more prosaic public figures: the city manager of Port Huron, Mich., and two members of the school board in Poway, Calif. In Lindke, the city manager discussed public business on his personal Facebook page, which he had converted to a “public figure” page by virtue of its large following, and he blocked a critic of his Covid response. In O’Connor-Ratcliff, the school-board members started their Facebook pages to promote their campaigns, and one of them also had a Twitter account; they blocked harassing comments including “nearly identical comments on 42 separate posts on O’Connor-Ratcliff’s Facebook page and 226 identical replies within a 10-minute span to every tweet on her Twitter feed.”

The Court sent both cases back to the lower courts, but not before issuing Barrett’s opinion for the Court in Lindke detailing the standards for determining when a social-media account is a public forum. Most cases involving the line between private and state action are about whether some private actor is effectively acting on behalf of the state — including a pending case on when the social-media platforms themselves are acting as cat’s paws for government suppression of speech. Most cases involving speech by individual public officials or employees, by contrast, concern when they can’t be punished by the government for what they say in their personal capacity, or when the government isn’t liable for what they do in that capacity. Barrett warned that the Court was wading into unfamiliar turf:

Today’s case . . . requires us to analyze whether a state official engaged in state action or functioned as a private citizen. This Court has had little occasion to consider how the state-action requirement applies in this circumstance. The question is difficult, especially in a case involving a state or local official who routinely interacts with the public. Such officials may look like they are always on the clock, making it tempting to characterize every encounter as part of the job. But the state-action doctrine avoids such broad-brush assumptions — for good reason. While public officials can act on behalf of the State, they are also private citizens with their own constitutional rights. . . .

Lindke cannot hang his hat on Freed’s status as a state employee. The distinction between private conduct and state action turns on substance, not labels: Private parties can act with the authority of the State, and state officials have private lives and their own constitutional rights.

In spite of that caution, as has become its pattern in writing opinions adding to the thick gloss of free-speech and state-action precedents, the Court devoted no attention to the original meaning of the First Amendment, the 14th Amendment, or Section 1983, which creates the statutory basis for a lawsuit against constitutional violations undertaken “under color of” legal authority. Granted, it may be difficult to find an 18th- or 19th-century parallel to a city manager’s Facebook page (the Federalist Papers did not have a comments section), there is already a lot of precedent in these areas, and the Court is less likely to delve into historical materials when the parties don’t devote much effort to the question in their briefing. The briefs here got into the original meaning of the “under color of” language but really did not offer the Court assistance on the constitutional questions.

Building upon prior precedents, the Court ended up with a functional test, concluding that “a public official’s social-media activity constitutes state action under §1983 only if the official (1) possessed actual authority to speak on the State’s behalf, and (2) purported to exercise that authority when he spoke on social media. The appearance and function of the social-media activity are relevant at the second step, but they cannot make up for a lack of state authority at the first.” The Court cautioned that this “fact-intensive” inquiry may vary from one case to the next depending both on the nature of the social-media platform and on how the particular official used it:

Social media involves a variety of different and rapidly changing platforms, each with distinct features for speaking, viewing, and removing speech. . . . The nature of the technology matters to the state-action analysis. Freed performed two actions to which Lindke objected: He deleted Lindke’s comments and blocked him from commenting again. So far as deletion goes, the only relevant posts are those from which Lindke’s comments were removed. Blocking, however, is a different story. Because blocking operated on a page-wide basis, a court would have to consider whether Freed had engaged in state action with respect to any post on which Lindke wished to comment. The bluntness of Facebook’s blocking tool highlights the cost of a “mixed use” social-media account: If page-wide blocking is the only option, a public official might be unable to prevent someone from commenting on his personal posts without risking liability for also preventing comments on his official posts. . . . On some platforms, a blocked user might be unable even to see the blocker’s posts. . . . A public official who fails to keep personal posts in a clearly designated personal account therefore exposes himself to greater potential liability. [Emphasis added; footnote omitted.]

This is not a great standard in practice, because it gives lower courts few tools with which to dispose of expensive and protracted litigation that literally makes a federal case out of Facebook comment sections. It’s likely to chill public officials from moderating their accounts, or from allowing comments on their pages at all.

Was that really necessary? For example, as the Court noted, state action is typically not found “when the challenged conduct entails functions and obligations in no way dependent on state authority.” Whether or not a public official speaks with public authority when posting on his or her own social-media page, it’s hard to see how allowing or moderating comments depends on state authority unless the page is designated as a government forum (as the Court suggested it would be if the page is, say, the city’s official account rather than the city manager’s personal account).

The Court’s standard reveals the problem. The Court properly concluded, for example, that a post on some topic outside the official’s public responsibilities would not be traceable to state action, and also that the setting of a post may make it non-governmental:

Consider a hypothetical from the offline world. A school board president announces at a school board meeting that the board has lifted pandemic-era restrictions on public schools. The next evening, at a backyard barbecue with friends whose children attend public schools, he shares that the board has lifted the pandemic-era restrictions. The former is state action taken in his official capacity as school board president; the latter is private action taken in his personal capacity as a friend and neighbor. While the substance of the announcement is the same, the context — an official meeting versus a private event — differs. He invoked his official authority only when he acted as school board president.

But what if the commenter’s comment on the post is about the official’s public responsibilities? What if the commenter says something on a private topic on a private post, but is thereby blocked from commenting on public topics on the official’s public posts? In that latter situation, the Court effectively holds that the block is state action even though it arises out of a private comment on a private post. The Court’s standard makes sense in determining when a social-media post may be attributable to the government, but it does little to sensibly answer the question of when the blocking of commenters or the deleting of comments is state action. And by shielding public officials in many cases where they delete comments, while exposing them to much wider liability when they block commenters, the Court defeats the tools most needed by social-media users to deal with the kind of pervasive harassment at issue in the O’Connor-Ratcliff case.

This is a bad result, and it is disappointing that it attracted no dissent across the Court’s ideological lines. The opinion fails to explain as an original matter why comment moderation is state action in the first place. Public officials would be prudent to keep a clearer wall of separation between their public and private accounts, but that’s easier said than done. Little harm would be caused by allowing them a freer hand in blocking comments from the public, especially where there are official public accounts and offline spaces for commenters to receive public information and petition the government for redress of grievances. Nothing in the Court’s opinion would preclude Congress from offering a statutory safe harbor from such lawsuits.
