The Agenda

Mark Thoma vs. Michael Mandel on Regulation

Mark Thoma writes:

Remove barriers to entry: The ability of new firms to enter industries is essential to achieving a robust, innovative, flexible economy. But entry hurts existing firms and they will attempt to block potential competition. For this reason, active enforcement of anti-trust laws – far more than we’ve seen in recent years – is needed to ensure that the door is open for new ideas to be tried in the marketplace. In addition, though I don’t think overregulation is a problem generally, just the opposite, powerful firms often use regulations such as licensing laws to make it harder for new firms to enter. Barriers to entry, regulatory or otherwise, should be as low as possible.

In contrast, Michael Mandel has described how even well-designed regulations tend to raise compliance costs and deter new entrants.

In my paper on the Regulatory Improvement Commission, I argued that adding new regulations was like tossing small pebbles into a stream. Each pebble by itself would have very little effect on the flow of the stream. But throw in enough small pebbles and you can make a very effective dam.

Why does this happen? The answer is that each pebble by itself is harmless. But each pebble, by diverting the water into an ever-smaller area, creates a ‘negative externality’ that generates more turbulence and slows the water flow.

Similarly, apparently harmless regulations can create negative externalities that add up over time, by forcing companies to spend time and energy meeting the new requirements. That reduces business flexibility and hurts innovation and growth.
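Mandel’s arithmetic is easy to make concrete. Here is a minimal sketch, with entirely invented numbers (nothing below comes from his paper), of how compliance costs that are each trivial relative to expected profits can nonetheless sum to a decisive barrier for a would-be entrant:

```python
# Toy illustration of Mandel's "pebbles in a stream" point. All numbers
# are invented for illustration; nothing here comes from his paper.

EXPECTED_PROFIT = 500_000  # hypothetical entrant's expected annual profit
COST_PER_RULE = 4_000      # hypothetical fixed compliance cost per regulation

def enters(num_rules: int) -> bool:
    """An entrant enters only if profit net of total compliance costs is positive."""
    return EXPECTED_PROFIT - num_rules * COST_PER_RULE > 0

for n in (1, 10, 50, 100, 150):
    net = EXPECTED_PROFIT - n * COST_PER_RULE
    print(f"{n:3d} rules -> net {net:9,d} -> enter? {enters(n)}")

# Each rule costs less than 1% of expected profit, so no single "pebble"
# changes the entry decision -- but past 125 rules, entry stops entirely.
```

The point of the toy model is that evaluating each regulation in isolation, as a rule-by-rule benefit-cost test would, never detects the dam.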

Many, like Thoma, believe that it is very easy to distinguish good regulations from bad ones, or “smart” regulations from “dumb” ones, provided one has the right training and the right intentions. There are also those who believe that markets can be well understood by sufficiently intelligent and public-spirited antitrust officials, who really can fine-tune the decentralized trial-and-error process that is market competition: determining which mergers are acceptable and which are not, how the boundaries of a particular sector will evolve over time, and whether or when low-cost competitors will emerge.

Recently, my friend Ashwin Parameswaran quoted the ecologists Holling and Meffe on “the pathology of natural resource management”:

[M]uch of present ecological theory uses the equilibrium definition of resilience, even though that definition reinforces the pathology of equilibrium-centered command and control. That is because much of that theory draws predominantly from traditions of deductive mathematical theory (Pimm 1984) in which simplified, untouched ecological systems are imagined, or from traditions of engineering in which the motive is to design systems with a single operating objective (Waide & Webster 1976; DeAngelis et al. 1980; O’Neill et al. 1986), or from small-scale quadrat experiments in nature (Tilman & Downing 1994) in which long-term, large-scale successional or episodic transformations are not of concern. That makes the mathematics more tractable, it accommodates the engineer’s goal to develop optimal designs, and it provides the ecologist with a rationale for utilizing manageable, small-sized, and short-term experiments, all reasonable goals. But these traditional concepts and techniques make the world appear more simple, tractable, and manageable than it really is. They carry an implicit assumption that there is global stability – that there is only one equilibrium steady state, or, if other operating states exist, they should be avoided with safeguards and regulatory controls. They transfer the command-and-control myopia of exploitive development to similarly myopic demands for environmental regulations and prohibitions.

Those who emphasize ecosystem resilience, on the other hand, come from traditions of applied mathematics and applied resource ecology at the scale of ecosystems, such as the dynamics and management of freshwater systems (Fiering 1982), forests (Clark et al. 1979), fisheries (Walters 1986), semiarid grasslands (Walker et al. 1969), and interacting populations in nature (Dublin et al. 1990; Sinclair et al. 1990). Because these studies are rooted in inductive rather than deductive theory formation and in experience with the effects of large-scale management disturbances, the reality of flips from one stable state to another cannot be avoided (Holling 1986). [Emphasis added]
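The “flips from one stable state to another” are easy to see in a toy model. The sketch below is my illustration, not anything from the paper: it uses the textbook budworm-style equation (logistic growth minus a saturating loss term), with invented parameter values, which yields two stable states separated by an unstable threshold. Small disturbances are absorbed; a large one flips the system:

```python
# A standard toy model of bistability (budworm-style: logistic growth
# minus a saturating loss term). Parameter values are invented; this is
# an illustration of the "flip" idea, not Holling and Meffe's own model.

R, K = 0.4, 10.0  # growth rate and carrying capacity (illustrative)

def dxdt(x: float) -> float:
    return R * x * (1 - x / K) - x**2 / (1 + x**2)

def settle(x: float, dt: float = 0.05, steps: int = 20_000) -> float:
    """Crude forward-Euler integration until the state settles."""
    for _ in range(steps):
        x += dxdt(x) * dt
    return x

low = settle(1.0)                                # low stable state, ~0.46
print(f"equilibrium:  {low:.2f}")
print(f"small shock:  {settle(low + 1.0):.2f}")  # absorbed; returns to ~0.46
print(f"large shock:  {settle(low + 5.0):.2f}")  # crosses threshold; flips to ~5.9
```

An equilibrium-centered analysis calibrated to the low state would describe this system perfectly right up until the disturbance that flips it, which is exactly the myopia the passage warns against.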

To oversimplify, an emphasis on ecosystem resilience would suggest a turn away from crude anti-trust enforcement and towards a recognition that existing “sectors” have fuzzy boundaries, and that new competitors can come from entirely different “sectors.” That is, different populations will interact in the wild in complex, hard-to-anticipate ways. Siri might represent a long-run challenge to the dominance of Google, Zipcar might eventually finish off the reigning business models in the automotive industry, and so on. 

The key thing is to keep barriers to entry low by keeping compliance costs low, thus subjecting incumbents to the ever-present threat of competition. We want to prevent incumbents from using what Tim Carney has called “the overhead smash.” 

Reihan Salam is president of the Manhattan Institute and a contributing editor of National Review.