The Legal Case against Big Tech

The reasoning behind a case against Big Tech would rest on the distinction between hosting content and promoting it.

The Ethics & Public Policy Center’s Big Tech Symposium was held a few days ago. Many of the distinguished writers and academics there called the meeting “timely,” for good reason. The debate over tech conglomerates and their potentially outsized influence on our political landscape has intensified dramatically in the past few years. The complexity of the issues discussed may have left one wondering whether there was enough clarity to move forward. If there is one point of agreement, though, it’s that we need the courts to crack down on Big Tech’s overreaches. That should be a cause for both hope and worry.

The symposium held three roundtable discussions, on antitrust, "common carrier" designations, and Section 230 (the law that shields platforms, unlike publishers, from liability for hosted speech). Of these, antitrust is the most salient issue in modern politics. The antitrust panelists (University of Miami law professor John Newman, Roger Alford of Notre Dame Law School, and Mark Jamison of AEI) agreed on two main points: (1) trust-busting could provide bipartisan common ground, and (2) enforcing antitrust laws would be technically challenging.

Newman pointed out that tech companies have two consumer bases: advertisers and users. As a result, it’s difficult to measure monopolistic activity in zero-price markets, in which firms set the price of their goods or services at $0, with consumers typically trading their personal info in exchange for the product. It’s especially tough to assess monopoly power when harm must be measured across separate, indeed conflicting, consumer groups. Jamison added that predatory behavior is especially complicated in tech sectors because companies can cross-subsidize products, cutting prices in one area of their business while raising them in another.

As a rudimentary example, take Alphabet, which operates the entire Google suite including Search and Maps. It is theoretically possible for Alphabet to remove ads on Google Search while simultaneously increasing ad pricing on Maps. In such a scenario, Alphabet could undercut its competitors on Search while selling ads on Maps to recoup its losses. Advertisers would be hurt, but users might be better off.

Alford believes it's important to retain the "consumer harm" standard, which requires regulators to show an identifiable harm to customers before taking action. He sees a viable way forward, however, in expanding the standard's scope to include consumer choice. Yet the simplistic hypothetical above shows the difficulty of policing consumer harm: advertisers are harmed to the benefit of users, which means courts would be tasked with choosing between harms. The expansion is promising in theory, but if it is drawn too broadly, courts could find harm everywhere.

Despite the vigorous debate, conservatives agree on several points, starting with the observation that overly deferential judicial interpretations have allowed tech companies to push the boundaries of acceptable conduct.

Philip Hamburger, a professor at Columbia Law School, discussed how the current interpretation of Section 230 grants partial immunity to hosting platforms, yet courts do not enforce the corresponding duty to host content neutrally. Similarly, Richard Epstein, one of the most prominent legal thinkers today, noted at the symposium that "the 'for cause' requirement associated with removal [of social-media content] has been stretched to the point that it now looks as though partisans on one side are suppressing their rivals."

Both the application of Section 230 and the "for cause" requirement identified by Epstein would require more aggressive rulings from the bench. Supreme Court Justice Clarence Thomas has signaled that he might support a more aggressive stance against Big Tech, but it's unclear whether other justices would join him.

The legal reasoning behind a case against Big Tech would rest on the distinction between hosting content and promoting it. Twitter uses both algorithmic and human inputs to promote certain content on its platform; the "Trending" sidebar is one example. By distinguishing between hosting and promoting, the courts could require Twitter to host users without mandating that Twitter promote all content equally. Such a change would likely occur by expanding anti-discrimination laws to include "viewpoint discrimination."

This middle way, requiring hosting of all content but not promotion, seems promising. But before we jump to make laws that require Twitter or Facebook to host particular politicians or viewpoints, we must be careful. Matthew Feeney of the Cato Institute was clear in his remarks at the symposium that there are serious risks of regulatory capture. Rules about promotion algorithms or hosting mandates will not stay limited to the Big Tech companies.

It's also possible that rule changes may not be necessary. Mark Jamison pointed out that the pace of tech innovation recalls Bell's Law, which holds that a new class of computing emerges roughly every decade, displacing the old. If this holds true for Facebook, Amazon, and Google (as it did for MySpace and Geocities), regulatory intervention may be unnecessary.

The Big Tech Symposium showcased a range of viewpoints about regulating Big Tech companies. While legislators may not be in the best position to crack down on these companies, courts could take the first step by being less deferential to Big Tech. Before we endorse judicial measures, though, we should remember the words of UCLA law professor Eugene Volokh at the symposium: “There is almost no problem so bad that government regulation can’t make it worse.”
