Few expected it. Last week the D.C. Circuit Court of Appeals, in a 2–1 decision, completely upheld the Federal Communications Commission’s 2015 order regulating the Internet under Title II of the 1934 Communications Act, an order commonly called “net neutrality.” Most analysts predicted that the FCC would at most get a partial win, but legal challenges asserting that the order violated administrative law, the Communications Act, and the First Amendment failed to convince two of the three judges that deference was unwarranted. The decision ratifies the FCC’s decades-long transformation from economic regulator to social regulator and, if not reversed, will do lasting damage to U.S. technology and to free speech.
Readers with passing knowledge of net neutrality may have heard that it means that Internet service providers must treat all Internet traffic the same. This notion of equal treatment, repeated in the first line of the court opinion, has unknown origins, makes no appearance in the rules, and is widely derided by network engineers as a fantasy. Many services transmitted on broadband lines would break with “equal” treatment.
As you might gather from the FCC’s two prior failed attempts at regulating the Internet and from the length of the final order, net neutrality is far more than a traffic-management requirement. In the words of Tim Wu, the law professor who coined the term, the Internet rules are about giving the agency the ability to shape “media policy, social policy, oversight of the political process, [and] issues of free speech.”
The court decision is a godsend for the New Deal agency that was created to oversee the telegraph industry, AT&T’s long-distance monopoly, and broadcast radio. The upheld rules give the FCC sweeping new authority to regulate Internet and Web companies.
Title II regulations, created to police the Ma Bell monopoly, transform the Internet from a virtually unregulated, private system of networks into a quasi-public utility subject to conflicting common-carrier precedents, bureaucratic designs, and interminable waiver proceedings.
Many rules and regulations kick in, including a selective ban on blocking Internet content and oversight of the competitive Internet interconnection market, but the so-called general-conduct standard swallows them all. This amorphous rule allows the FCC to prevent any practice by an Internet access provider that the FCC believes will “unreasonably disadvantage” an Internet user, application, or content provider. The FCC and net-neutrality advocates correctly recognize that if the agency can monitor and control the distributors of speech, it can shape culture and politics.
For students of the FCC and media, this looks familiar. The FCC is reluctant to engage in obvious censorship — Janet Jackson’s wardrobe malfunction notwithstanding — so, like censors throughout history, interest groups use the agency’s licensing and regulatory powers to control the distributors.
The most prominent example of abuse of regulations came in the 1960s, when the Democratic National Committee and its affiliates used the FCC’s Fairness Doctrine to drive conservatives out of TV and radio for a generation. But in recent years, broadcast media and print newspapers have been losing influence to the Internet, television, and streaming video, and the new media seemed poised to escape regulators’ scrutiny.
The competitive technology marketplace should be a cause for celebration for a communications and media regulator. Instead, a well-functioning market needed a manufactured crisis — in this case, illusory “neutrality violations” — for the agency to reassert power. Title II brings new media firmly inside the regulatory tent.
Until this expansion of power, the FCC — like other common-carrier regulators, including the Civil Aeronautics Board and the Interstate Commerce Commission — faced the real prospect of slowly winding down, as the AT&T monopoly fell apart and mass media exploded with the Web and new technology. Until the early 1990s, regulators treated television and telephone as natural monopolies, and most consumers faced high prices and no choice for cable TV and local phone service. Laissez-faire in communications and media — which gained steam in the Carter and Reagan administrations — led to the deregulatory 1996 amendments to the Communications Act.
The 1996 law broke down regulatory silos in communications, and the competitive upheaval since then is one of the great untold stories of deregulatory success. Local phone companies have lost more than 100 million subscribers since 2000 as consumers switch to cellular carriers or to phone service from their cable company. Cable-TV providers, likewise, have faced punishing satellite and phone-company competitors, and cable’s share of the subscription-TV market has fallen from 94 percent in 1996 to 53 percent today.
It’s important to understand how broadband Internet works. While it appears to function simply, in reality it is a complex network growing more differentiated every year. Broadband Internet access is a single pipe that can transmit a host of services and applications, including Facebook access, e-mail, phone, teleconferencing, gaming, streaming television, data backup, and more. But broadband is shared by many users, and providers can’t offer the full multitude of services to all customers at acceptable quality and prices at all times. So tradeoffs are made. Some are obvious, like giving an Internet-protocol phone call precedence over another user’s monthly operating-system update. Others are complex tradeoffs related to interconnection price, content costs, the protocols the applications use, predicted consumer demands, and available capacity.
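The prioritization tradeoff described above, giving a latency-sensitive phone call precedence over a bulk operating-system update, can be sketched as a simple strict-priority queue. This is only an illustrative toy (the traffic classes, priority values, and class names are invented for the example); real broadband schedulers are far more elaborate:

```python
import heapq
import itertools

# Illustrative traffic classes; lower number = transmitted sooner.
# Real ISP schedulers weigh many more factors (capacity, protocol, demand).
PRIORITY = {"voip": 0, "video": 1, "web": 2, "bulk_update": 3}

class Scheduler:
    """Toy strict-priority packet scheduler for a shared broadband pipe."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # FIFO tie-breaker within a class

    def enqueue(self, packet_id, traffic_class):
        heapq.heappush(
            self._queue,
            (PRIORITY[traffic_class], next(self._counter), packet_id),
        )

    def dequeue(self):
        # Pop the highest-priority (lowest-numbered) packet next.
        return heapq.heappop(self._queue)[2]

sched = Scheduler()
sched.enqueue("update-chunk-1", "bulk_update")
sched.enqueue("call-frame-1", "voip")
sched.enqueue("page-request-1", "web")

order = [sched.dequeue() for _ in range(3)]
print(order)  # ['call-frame-1', 'page-request-1', 'update-chunk-1']
```

Even this toy shows why “equal” treatment is a fantasy: the VoIP frame jumps ahead of the update chunk that arrived first, precisely the kind of differentiation the article says broadband networks routinely make.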
Title II rules make the FCC the ultimate arbiter of which tradeoffs and business models are acceptable. Call it innovation by regulatory waiver. When providers are unsure whether a new technology or business model “unreasonably” harms some Internet constituency, they can submit those prospective plans to the Commission and pray for an affirmative (and timely) advisory opinion. These advisory opinions border on the Kafkaesque. The FCC can decline the request for an opinion, can permit the innovation, or can require more information from the submitting party. These opaque determinations cannot be appealed, and affirmative decisions can be reversed at the agency’s whim.
If history is any guide, these Title II rules and obligations will drive out smaller ISPs that can’t afford to hire lawyers and lobbyists to interpret the neologisms and incantations that will pour forth from the FCC. The larger carriers, with hallways of attorneys watching the agency’s every move, will muddle through the complexity as they grow more sclerotic, and they may even grow a little larger and more profitable as weaker rivals throw in the towel. Internet and technology companies, used to Silicon Valley’s “move fast and break things” culture, will increasingly need to lawyer up and ask permission before experimenting with new technology that touches on data transmission.
Some Internet providers may initially fight or test the legal boundaries, but the FCC has ways of breaking defiant firms. Most alarming, the agency is increasingly using license and transaction approvals to extract policy concessions — net-neutrality compliance, more public-affairs, Spanish-language, and children’s TV programming, abandonment of editorial control of TV and radio channels — that it cannot, or will not, enact via formal regulation. In the long run, Internet and technology companies, now FCC supplicants, will have to divert funds from new services and network design to fending off regulatory intrusions and negotiating with the Internet’s new zoning board.
— Brent Skorup is a research fellow in the Technology Policy Program at the Mercatus Center at George Mason University.