The Corner

Re: The Pragmatic Case Against the Death Penalty

As has been mentioned here, Noah Millman has a characteristically deep post about the justice of the death penalty that begins with this:

There’s been a running debate between myself, Alan Jacobs, and Jim Manzi in this space, touching down in a number of posts, over whether it matters (pragmatically) whether people believe human beings have a unique and transcendent value (whether we call that value “human dignity” or a “right to life” or a consequence of being “children of God” or what-you-will).

In case it’s not clear, I’ve been on the side of “it matters a lot whether or not we think humans have a unique and transcendent value.” I’ve spent some time trying to sort through my reactions to this specific post; let me do so with a hypothetical.

Imagine that a large team of AI researchers builds several thousand small, battery-powered, wheeled, box-shaped robots. The researchers write software that governs the motion of these robots. This software has various rules like “If another robot gets within X feet, then move in direction Y at speed Z”. The numerical values of the parameters X, Y and Z are set uniquely for each robot using a pseudo-random number generator. The actual set of rules is very, very long, and no one programmer fully comprehends it. The only way to see how these robots will act is to put them together and watch what happens.
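
To make the hypothetical concrete, here is a minimal sketch of one such rule in Python. Every name, parameter range, and rule here is invented for illustration; the point is only that each robot’s behavior reduces to pseudo-randomly drawn parameters fed through fixed rules.

    import random

    class Robot:
        # One robot in the hypothetical: its only "personality" is a set of
        # parameters drawn from a pseudo-random number generator.
        def __init__(self, robot_id, seed):
            rng = random.Random(seed)
            self.robot_id = robot_id
            self.trigger_distance = rng.uniform(1.0, 20.0)   # X, in feet
            self.heading = rng.uniform(0.0, 360.0)           # Y, in degrees
            self.speed = rng.uniform(0.5, 10.0)              # Z, feet per second

        def step(self, nearest_robot_distance):
            # One rule of the very long rule set: if another robot gets
            # within X feet, move in direction Y at speed Z.
            if nearest_robot_distance < self.trigger_distance:
                return ("move", self.heading, self.speed)
            return ("idle", 0.0, 0.0)

    # Several thousand robots, each with uniquely drawn parameters; the only
    # way to see what the full rule set does is to run it and watch.
    robots = [Robot(i, seed=i) for i in range(1, 3001)]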

The researchers scatter the robots around an enclosed football field and activate them, and they start moving around. Because of the parameter values selected for its code, robot number 1837 begins smashing into other robots at high speed, destroying them.

In sub-case 1, numerous other robots, observing this with their embedded sensors, and operating according to the software that governs their motion, simultaneously move towards this robot and ram it hard enough to destroy it. Then these robots resume moving much as they had before this event.

In sub-case 2, numerous other robots, observing this with their embedded sensors, and operating according to the software that governs their motion, simultaneously move towards this robot and surround it. They remain there indefinitely, which prevents robot 1837 from moving.

Is it a meaningful question to ask “under what conditions are the robots justified in executing sub-case 1 or sub-case 2”? Is it meaningful to ask whether robot 1837 has done anything “wrong”? Does morality, duty, fairness, or anything like that describe the behavior of any of the robots? Has any of these robots made a decision or exercised will?

If the answer to these questions is ‘no’, then what distinguishes humans, if we are merely complex machines, from these robots in a way that makes any of these concepts relevant to us?

And if we are just complex machines, then it seems to me that, absent any transcendent value for human life, I would advocate anything from widespread use of the death penalty to outlawing it, based simply on what I perceived to be in my material self-interest. More precisely, that would be my reserve position: I would advocate whatever position I believed it would best serve my material self-interest to be seen publicly advocating. All the talk about duties, satisfaction, justice, and so on would, I think, sound to me like a bunch of chatter about unicorns and the tooth fairy.

Jim Manzi is CEO of Applied Predictive Technologies (APT), an applied artificial intelligence software company.