

California gubernatorial candidate Zoltan Istvan wants to get on AI’s good side.
We live in an era when activists of various stripes argue that, well, everything should have rights. Animals, nature, plants, the moon, rivers, AI/robots, you name it.
Now, in Newsweek, the transhumanism popularizer and California gubernatorial candidate Zoltan Istvan argues that we should give robots rights so they will show us mercy. Seriously. From his article, “Why Giving Rights to Robots Might One Day Save Humans”:
The discussion about giving rights to artificial intelligences and robots has evolved around whether they deserve or are entitled to them. Juxtapositions of this with women’s suffrage and racial injustices are often brought up in philosophy departments like the University of Oxford, where I’m a graduate student.
This is the problem with all non-human-rights activists. They continually compare their favored supposed rights-bearers with human beings who were denied equality in the past. But those denials were wrong — and in some cases evil — because inherent equals were treated as if they were unequal.
As for AI/robots, it seems to me that for an entity to have any claim to inherent moral value, it must be a living organism. Robots and AI are mere machines. AI is a slave to its computer programming; it is not conscious and can never “feel,” as doing so requires a functioning body. An AI robot may be worth millions but has no greater moral value than a toaster and has no claim to “equality” whatsoever.
But, Wesley, why should “life” matter in granting rights?
Inanimate objects are different in kind from living organisms. They do not possess an existential state. We cannot “wrong” that which has no life. We cannot hurt, wound, torture, or kill what is not alive. We can only damage, vandalize, wreck, or destroy these objects. Nor can we nourish, uplift, heal, or succor the inanimate, but only repair, restore, refurbish, or replace them.
Moreover, organisms behave. Sheep and oysters relate to their environment in ways consistent with their inherent natures. In contrast, AI devices have no natures, only mechanistic designs.
Istvan told me years ago in an interview that he is a “theistcideist,” meaning that he believes that a superintelligence created all things and then committed suicide to give free will to the universe. That seems like a fancy way of saying he is a materialist. But he mystically anthropomorphizes AI when he asserts that AI could one day “become godlike,” writing, “It’s even likely this AI will be so smart it will know ways to extend human lifespans indefinitely, giving it powers similar to how people perceive a Judeo-Christian God.”
He then hopes that if we play our cards right, a future AI Zeus may have mercy on us:
Such circumstances create a philosophical case for a new, modern wager that helps guide humanity toward ensuring the respectful development of super-intelligent robots which might then evolve into an AI god. Benevolent human action could improve the odds humanity is protected instead of harmed by this type of future intelligence because the AI has gratitude for us as its compassionate creators. For example, an AI god may reward its makers and facilitators with superpowers or eternal happiness.
Mercy and compassion are actions that arise out of our emotional human nature and our empathetic relationships with one another, the natural world, and/or the divine. Gratitude is an emotion of thankfulness. How could an AI exhibit such virtues? As a machine, it is completely incapable of emotions or feelings. As for eternal happiness, that doesn’t exist on this side of the grave.
Ditto for the negative side of the coin that Istvan posits:
Naturally, the opposite could happen too. A dark version of this idea, postulated as Roko’s basilisk, asks if an AI god would be vindictive because humans did not actively work to bring about its existence. If a super-intelligent AI doesn’t like us, it could choose to harm or wipe us out.
Again, AI would be incapable of either “liking” or “disliking” us.
There is no question that AI is a powerful tool that is going to reshape how society functions. But whatever consequences flow from those events will not be based on anything inherent in the AI. They will depend, for better or worse, on how we develop and direct the technology. As always, our futures remain in our own hands.