What is it about this era that bases (seemingly) everything on “feelings,” as opposed to principles and rationality?
I just read a classic example of this contemporary affliction over at Psychology Today. A Ph.D. named Thomas Hills argues that we will accord human-style rights to robots because we will come to empathize with them.
First, he declares that the human rights expansion of the last two hundred years was due to empathy. From “Robots Will Have Rights”:
Rights are granted because enough people with rights care enough about those without them.
That caring can be misguided. It can be humble. It can change on a whim. But if we care, then it matters. That’s how the bots will get their rights.
This is why slavery was abolished, why US states ended coverture (the practice of granting a woman’s rights to her husband), and why the law eventually prevented us from selling our children.
No. If rights are merely based on feelings — which, as Hills notes, are highly changeable — they can be here today and gone tomorrow. That would make them privileges, not rights.
Contrary to Hills: Our understanding of human rights is predicated on the principle of human exceptionalism — that we each possess equal moral worth, objectively, simply and merely because we are human. “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness.” Jefferson’s epochal declaration had nothing to say about “feelings” being the foundation of rights.
Thus, enslaved African Americans were not liberated because of affection for them — which mostly did not exist in those racist times. Rather, abolition finally arrived when enough people in the North accepted that African Americans are human beings deserving — for that reason alone — of freedom (and, I would add, out of a desperate yearning to find higher meaning in the catastrophic carnage of the Civil War).
Back to the robots:
Robots will have rights because we will care about them enough to empathize (regardless of whether or not they can empathize with us). And because at least some of our robots will house algorithms built to adapt to our interests, we will eventually empathize. Because we like things we can empathize with.
When the first man drowns trying to save his robot love, we’ll look on in guilty horror, knowing that there but for the grace of God go I. That plot is already developing. A man recently let a woman drown because he claimed he didn’t have anyone to give his phone too [sic].
Rights involve responsibility and moral accountability. Robots will never be moral agents because they cannot have free will; they depend wholly on programming to determine their actions — note that I do not use the word “behavior.”
As for the man letting the woman drown: People were righteously horrified because that idiot let a human being die over an inanimate object! It was the outright perversity of that decision that outraged. The episode was not a harbinger of humans coming to love robots so much that we will turn them into an “us.” If you doubt me, look at the values promoted by the — ugh — sex-robot industry, whose practices would be considered human trafficking and sex slavery if they involved real flesh-and-blood women.
Hills closes on an even more ludicrous note:
Panpsychists argue that everything material has an element of consciousness. Even rocks. We can’t know.
What we can and eventually will do is take the robot’s perspective. We can see the world through its eyes. And the more it becomes like us, the more it can tell us its story.
The more of its stories we hear, the easier it will be to take its perspective.
Panpsychists (good grief!) aside, robots won’t have “stories.” They won’t have subjective “perspectives.” They will never be “us.” Even the most sophisticated AI-driven robot will always be a mere tool — a very expensive one, to be sure — but of no more inherent moral value than your toaster.