Regulators Shouldn’t Overreact to the First Self-Driving Car Crash

A driver demonstrates the Tesla autopilot feature in Palo Alto, Calif. (Beck Diefenbach/Reuters)
One driver’s error doesn’t change the fundamental safety — and revolutionary potential — of Tesla’s technology.

Tesla’s much-lauded “autopilot” feature is considered by many to be a vision of the future: a properly equipped car that can manage its own speed in traffic, change lanes, steer, and even park without human guidance. The technology has been hailed as a boon to the environment, worker productivity, and traffic congestion, among other things.

Now, it faces its first big test, after a fatal crash in Florida.

The facts of the Florida crash suggest no substantial failure on Tesla’s part. The driver, Joshua Brown, explicitly opted in to the use of an experimental technology and was clearly warned about how to conduct himself safely while using it, yet failed to follow those instructions. Tesla advises drivers to stay attentive and remain ready to respond to the road at any time, but Brown appears to have been watching a Harry Potter movie when his car’s sensors failed to distinguish the side of a tractor trailer from a similarly colored sky and drove him to his death.

It is perfectly natural to want to prevent such unfortunate fatalities, but the instinct to do so ought not to be acted on without careful consideration.

There seem to be two types of attempts to intervene in the automated-car market. The first is a broad rejection of the technology: “That man would not have died if he hadn’t been lulled into a false sense of security by an immature technology; we ought to clamp down on the industry until we can be assured of safety for all.” Though this may sound like an obvious straw man for a more nuanced argument, it unfortunately is not.

The simple response to the Luddite position is that a flawed autopilot system on the road is safer than a world with no self-driving vehicles. Autopilot has produced one death in 130 million miles on the road, a fatality rate much lower than that of human control: Americans average one death per 94 million miles driven; the global rate is one death every 60 million miles. Elon Musk, CEO of Tesla, has estimated that universal application of autopilot technology would have saved more than half a million lives last year. If the underlying motive of those who would curtail the autopilot feature is a belief in the value of human life, why aren’t they agitating for the fastest possible expansion of the technology, bugs and all?

The second, apparently more reasonable type of intervention concedes that companies such as Tesla have produced outcomes superior to the status quo, but still sees a place for regulation to tackle easy problems that might marginally improve safety. Surely, in the wake of a tragic crash, there is something that the engineers missed? Some requirement that could have prevented the driver’s death?

There are many reasons to suspect that, especially in a fast-paced industry such as automated transportation, “easy” problems don’t exist, and well-meaning legislation can end up hurting more than it helps. When the standard of quality in an industry advances rapidly with every passing year, politicians and regulators face serious hurdles to producing timely legislation. How long after a technology exists can someone in a state senate be reasonably expected to fully understand its strengths and weaknesses? How long after that before a bill is written, passed, and implemented?

Imagine if in 1972 legislators had seized on the nascent development of airbags as a good way to prevent automotive deaths. The airbag was a cutting-edge technology, after all, and it looked like a promising public-health advancement. Well-meaning, clever, experienced individuals could have spent all year crafting the perfect technical requirements for what to install in cars, passed a law, and forced automakers to comply with the standards. One year later, they would have found all that hard work undone as research showed that new three-point seatbelts were more effective and less likely to hurt occupants. Over the next decade, airbag technology would also advance enormously, rendering the original regulatory standard doubly outdated. Manufacturers would have been on the hook for costly adherence to airbag standards that they could have surpassed at less expense had they simply responded independently to the market and the advancement of the technology.

Even a more generous vision of the ability of regulators to respond to new technology still must grapple with the fact that they tend to be too conservative about allowing improvements onto the market quickly and painlessly. The burdens of the FDA’s enormous maze of red tape, its risk aversion, and its ossified incentive structure fall on real people who might otherwise see their lives improved or saved by the latest medical advances. Regulation can also include provisions that prevent future advancements outright, decapitating entire industries and holding back quality of life for decades, as in the case of the supersonic aircraft ban.

So, when the California state legislature tries to establish a reasonable distance that self-driving vehicles ought to maintain behind others, there is cause for concern. What’s wrong with the flawless-so-far following guidelines already used by autonomous vehicles? Do these politicians seriously intend to establish a list of following distances for every possible variation of car type, road type, speed, incline, and weather condition? Do they seriously think that their list would be more efficient, safe, or receptive to future technological advances than the status quo? Would their proposed regulation even have prevented a death caused by a failure to see the other car in the first place?

Self-driving cars are much safer than human-driven cars, and, until now, the industry that produces them has operated to great success without regulation. There is ample evidence that regulators don’t do well in fast-paced growth fields and may even cause harm. Because any expansion of this technology is a good thing for human safety and quality of life, states should be very, very careful about trying to direct its evolution.

— Austin Rose is a student at Brown University and an intern at National Review.

editor’s note: This article has been amended since its initial publication.
