
Liberals Want Your Car Keys

Can computerized cars drive better than we can?

The cover story of Time’s March 7 issue makes “the increasingly compelling case for why you shouldn’t be allowed to drive,” claiming that computerized cars are (or, it is hoped, will be) safer drivers than humans, and so the logical thing is to ban humans from driving altogether. The plan is simple and familiar: First you use behavioral economics (higher taxes) to discourage a certain behavior — think of smoking — and once it’s gotten really unpopular, you ban it. Before you know it, you can’t smoke in Central Park.

In one of the sillier arguments I’ve encountered in print this decade, Time author Matt Vella notes that “there is no ‘right to drive’ enshrined in the U.S. Constitution.” Last time I checked, it was enshrined right next to the right to breathe and the right to wear socks.

Vella admits that weaning America off its "long-standing romance with its cars" will be a tough chore. But it is apparently a worthy task because Vella, like millions of other Americans, has been the victim of an accident involving a human-driven car. So he knows firsthand just how dangerous letting humans drive can be. His whining rhetoric is reminiscent of that of the anti-gun lobby, which similarly maintains that the only thing preventing us from saving lives is an irrational and outdated emotional attachment.

This is pat leftist thinking: “Individuals want X. Individuals are incapable of doing X efficiently by themselves. Therefore X should be provided for them by experts.” The experts are generally the government, often the academics, but never the individual. They know which doctor you should see, which operations your insurance should cover, which schools your kids should attend, and what the curricula should be. Think of then-candidate Obama’s infamous “Life of Julia”: Everything is taken care of for Julia, the perpetual child of leftist America, so that she isn’t bothered with the tedious business of making her own decisions, which would be inefficient and probably wrong.

Vella says that "contrarians" (apparently the automotive equivalent of global-warming "deniers") often raise the vexed question of how it can be ethically acceptable to have a car make life-and-death choices: for example, choosing between killing a dozen bystanders in the road and swerving to kill its lone occupant instead. Rather than attempting an answer, or even an examination of this serious subject, Vella writes that "these and plenty of other objections will provide ammunition as America's libertarian id struggles to hold on to the keys." No doubt the same way we struggle to hold on to God and guns.

It is worth mentioning, as Vella does not, that in such instances the car is not making a life-and-death decision — it is having its decision made for it by whoever wrote the code. Which raises the further question of who would be qualified to write such an algorithm. You’d have to start by assuming you could quantify the value of human life according to certain parameters — are two humans invariably “worth more” than one? How about three adults versus two children? Any freshman college debater will have learned how ridiculous this line of questioning quickly becomes. But Vella just expects that driverless cars will automatically be provided with a series of equations that have worked all this out.
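To make the difficulty concrete, here is a purely hypothetical sketch (in Python) of the kind of function such an algorithm would need. Every name, parameter, and weight below is invented for illustration, drawn from no real system, and that is precisely the point: somebody has to pick these numbers.

```python
# A purely hypothetical "value of life" calculation of the sort a
# crash-decision algorithm would require. Every weight here is an
# arbitrary assumption -- which is exactly the problem.

def casualty_cost(people):
    """Sum an arbitrary 'cost' over a list of (age, is_occupant) tuples."""
    total = 0.0
    for age, is_occupant in people:
        weight = 1.0
        if age < 18:
            weight *= 1.5   # are children "worth" 1.5 adults? Who decides?
        if is_occupant:
            weight *= 0.9   # should the car discount its own passenger?
        total += weight
    return total

def choose_maneuver(swerve_victims, straight_victims):
    # The car "decides" by comparing two arbitrary numbers.
    if casualty_cost(swerve_victims) < casualty_cost(straight_victims):
        return "swerve"
    return "straight"

# E.g.: choose_maneuver([(35, True)], [(8, False), (40, False)])
```

However the weights are tuned, the comparison is only as defensible as the assumptions baked into it, and no amount of engineering makes those assumptions anything other than moral judgments.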

Even before we get to the stage of licensing computer-cars to kill, there's the more basic question of whether or not they'll work. Given the almost touching trust tech advocates put in the all-powerful computer, they might be surprised to learn that some problems are uncomputable: no amount of time or processing power will ever solve them. One such problem sounds surprisingly basic: Can we tell whether any given program will ever stop running? This is called the "Halting Problem," and, in 1936, the mathematician Alan Turing, a founding father of computer science, proved that no general procedure can determine, for every possible program, whether it will reach its intended conclusion or get stuck somewhere in an infinite loop. (It was in this same paper that Turing gave us his famous mathematical definition of a computer, the "Turing Machine.") The corollary is that since we cannot decide, in general, whether a program will halt, we cannot decide whether a program has been completely debugged; the question is undecidable.
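For the curious, Turing's proof is short enough to sketch in a few lines of code. The sketch below uses Python purely for illustration (the original argument predates any programming language): assume we had a working halting oracle, here called halts(), and a contradiction follows.

```python
# Sketch of Turing's proof by contradiction. Suppose, for the sake of
# argument, that someone handed us a universal halting oracle:

def halts(program, input_data):
    """Hypothetical: returns True iff program(input_data) eventually halts.
    Turing's theorem says this function cannot actually be written."""
    raise NotImplementedError

# Then we could build this troublemaker:
def paradox(program):
    if halts(program, program):  # would it halt when fed its own source?
        while True:              # ...then loop forever instead
            pass
    else:
        return                   # ...otherwise, halt immediately

# Now ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) returns True, paradox loops forever.
#  - If it returns False, paradox halts immediately.
# Either way the oracle gave the wrong answer, so no such oracle can exist.
```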

You might argue that, even though we can’t prove that a piece of software has been debugged, we will reach a point where it doesn’t matter because computers have become “good enough.” This may well be, but I would point out that my laptop still crashes occasionally — it’s just fortunate that it doesn’t kill anybody when it does.

The right to drive is a large part of our freedom of movement (traditionally, along with private ownership of firearms, among the first rights curtailed by dictators). The computer can make sure you always stick to the speed limit and never visit an off-limits area. Fine. But the skeptical conservative notes that in America we retain the right, in the final analysis and in an emergency, to decide for ourselves when the rules need to be broken.

There is also a genuine emotional argument to be made — and we should reject the notion that an emotional argument is, ipso facto, worthless. Mayor Bill de Blasio couldn’t understand why New Yorkers laughed off his idea to replace Central Park’s famous horse-drawn carriages with electric buggies. To the rigorously unimaginative, the horse-drawn carriage and the buggy perform exactly the same role and are therefore identical except in terms of efficiency. But, to the normal person, the horse has an obvious romance and is part of another right not enshrined in the Constitution — the right to have a good time and enjoy life, when possible. Driving is fun. Liberals would be thrilled to death if we all drove the exact same type and model of car (or, rather, had it drive us). I think it might be a little boring. All those Toyota Priuses . . .

But if switching over to driverless cars will save lives — and there’s substantial evidence that it would — how can we have the gall to argue against it? Our final defense is that we believe there is some intangible value to having a human in charge, even if the human does a demonstrably worse job than a computer would. The way to be really safe would be to lock us all in bulletproof cases, where we couldn’t hurt ourselves or anyone else, where we’d be fed only healthy foods (no 20-ounce soft drinks, obviously), and in which we’d be ferried safely from one approved, safe location to another by reasonably obedient machines. It would be even safer if we could just plug our brains into something (we could call it “the matrix”) and never have to go anywhere at all. Some transhumanists are — no kidding — looking forward to this. But other people might consider that there is a certain price we pay in danger — even in death — for being human and living in a world run by humans. We should think twice before deciding we want to live in a world run by something else.
