On Saturday, Nate Hochman argued in these pages that, in order to “secure a wider sphere of political liberty,” the time has come for a “narrowing” or “repealing” of Section 230 of the 1996 Communications Decency Act. Hochman is wrong on the merits, wrong on the detail, and wrong in his underlying implication, which is that the social-media companies he wishes to control are “state-sanctioned actor[s].”
At the outset of his essay, Hochman notes correctly that “Section 230 protects Internet platforms from being held liable for the content that individual users post on their forum, holding that ‘no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’” But he then immediately errs by drawing a link that does not exist. “In so doing,” he writes, “[Section 230] also allows those platforms to police and censor content with impunity — so long as they take those actions within the nebulous confines of ‘good faith.’”
That word — “allows” — forms the basis of Hochman’s argument. Indeed, he routinely implies that, absent Section 230, the “platforms” that have raised his ire would be unable to “police and censor content with impunity.” But that, of course, is nonsense. Contrary to popular insinuation, “the providers of interactive computer services” were “allowed” to decide what appeared on their systems before Section 230 existed, and they would be “allowed” to decide what appears on their systems if Section 230 went away. Why? Because, irrespective of the existence or the details of Section 230 — including the lawsuit-parrying language in section (c)(2), which Hochman repeatedly conflates with section (c)(1) — those providers are private organizations that are protected by the First Amendment. Of course Section 230 does not prevent private companies from moderating material “whether or not such material is constitutionally protected,” as Hochman complains; the Constitution binds the government, not private businesses. And the purpose of Section 230 is both simple and dull: to ensure that private businesses that provide “interactive computer services” cannot be sued or prosecuted if someone other than themselves writes something defamatory or illegal on their site without their prior knowledge. It is not to grant private institutions the rights they already possess.
To illustrate this, consider the website that you are currently reading. Since its inception in 1955, National Review has enjoyed the right to publish or decline to publish whatever the hell it wants — whether that be articles written by its staff, letters written to the editor, or comments submitted by its readers. Both online and offline, National Review had this right before Section 230 came into existence, and it would retain this right were Section 230 repealed. The sole effect of Section 230 on National Review is to ensure that if a third party writes something unsolicited in the comments section of an article or blog post, National Review cannot be held criminally or civilly liable in the same way as it could be if it had made a conscious decision to publish precisely the same set of words (this piece, for example). Were that protection to be taken away, it would not lead to a free-for-all at National Review, but to the exact opposite. Across the board, the people who run this website would retain the same capacity to moderate its contents as they did before. Now, though, they would have a far greater incentive to do so.
Echoing a common complaint, Hochman characterizes these rules as “a special, targeted set of protections that legislators saw fit to extend to one specific industry.” But the “one specific industry” here is not really “an industry,” so much as it is the entire Internet. Yes, Facebook is covered by the principles underpinning Section 230. So is the New York Times. So is Major League Baseball. So is Microsoft Azure. So is Spotify. And so is Emily’s Cat Blog. In reality, Section 230 isn’t “targeted” at all. Instead, it’s a categorical provision that sets in place a commonsensical principle: Websites can’t be held liable for words that they neither published themselves nor expressly permitted others to publish in their pages.
It is, of course, true that Congress was not obliged to establish such a rule. But, pace Hochman, there is nothing intrinsically “free market” about the prospect of “narrowing Section 230’s liability protections or even repealing the provision altogether.” Section 230 sets a liability standard within a civil system that is, by definition, superintended by the government. If we are to have that civil system — and, presumably, Hochman is not opposed to its existence per se — then we will have to have rules that govern it. My preference is for those rules to attach liability to the party who is responsible for the speech at issue. Hochman’s preference is to attach liability to the platform on which that speech was published — even if it had no foreknowledge whatsoever of said publication. Neither one of these options is more or less “free market” than the other — although it should be said that only one of them is likely to make online providers less willing to “police and censor” their users than they would be if they were on the hook for those users’ speech, and, funnily enough, it’s not the one on which Hochman seems so inexplicably keen.
Hochman continues his complaint by griping that Section 230:
. . . effectively outsources censorship to monopolistic private actors, affording companies like Facebook and Twitter a sweetheart deal: It deputizes them to act as arbiters of the public square without any of the constitutional strings that attach to state entities under the First Amendment.
To bolster this idea, Hochman points to the Supreme Court’s ruling in Norwood v. Harrison, which holds that “a state may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.”
Here, he flies wildly off the rails. The moderation decisions that are made by Twitter and Facebook and National Review and Johnny’s Bar and Emily’s Cat Blog may, indeed, be stupid. But they are those sites’ bad decisions, not the U.S. government’s. “Deputize,” “outsource,” “induce” — these words all mean something concrete, and that something is not “to act according to one’s own ideological star.” Donald Trump, you will likely recall, was expelled from both Twitter and Facebook while he was the president of the United States. It would be an extremely strange form of deputization that led to that outcome, n’est-ce pas?
Hochman insists that abolishing or reforming Section 230 would not be “inconsistent with the principle of limited government.” And, in a narrow sense, that is true. The antitrust issues and questions of common-carrier status that the big social-media companies raise, for instance, should be taken seriously, and addressed separately from the question of liability. And yet, hovering underneath Hochman’s essay there lies an extraordinarily statist implication that deserves to be dismissed with prejudice: the notion that if a government seeks in any way to codify or ease the rights and liabilities of definitionally private actors, that government becomes complicit in how those rights are subsequently used. Given their totalizing worldview, it would make sense for progressives to cast a lack of government action as de facto “deputization.” To a conservative, however, the idea should be anathema.