Transhumanists, being mostly materialists, desperately yearn for something more: immortality, biotech that grants super-human powers and capacities, a life beyond being human.
Many look to artificial intelligence–AI–as the means of fulfilling their eschatology of post-human greatness. Here’s the prophecy: Technology will become so sophisticated that it will reach a Big Bang point–known as “The Singularity”–after which transhumanism will become an unstoppable Moses leading humankind to the post-human Promised Land.
Me? I don’t think any of that will happen. But, as I have often written, I do worry that the movement’s values–and zeal–are distinctly Utopian. For example, up-and-coming transhumanist Zoltan Istvan has said that preventing transhumanist striving could justify war.
Contrary to the movement’s self-perception, it is also malodorously authoritarian. Case in point: Reason’s science reporter Ronald Bailey–a fan of transhumanism–interviewed movement guru Nick Bostrom about AI machines. Bostrom warns that they could become quite dangerous because of their power and their potential to function independently of our control.
I agree, which makes me wonder why transhumanists still want to turn them on!
Rather than reject artificial intelligence, Bostrom believes we should program AI super machines to act benignly in the common interest as the Singularity explodes. From “Will Super Intelligent Machines Destroy Humanity?”:
Rather than directly specifying a final goal, Bostrom suggests that developers might instead instruct the new AI to “achieve that which we would have wished the AI to achieve if we had thought long and hard about it.”
This is a rudimentary version of Yudkowsky’s idea of coherent extrapolated volition, in which a seed AI is given the goal of trying to figure out what humanity—considered as a whole—would really want it to do. Bostrom thinks something like this might be what we need to prod a superintelligent AI into ushering in a human-friendly utopia….
He argues for establishing a worldwide AI research collaboration to prevent a frontrunner nation or group from trying to rush ahead of its rivals. And he urges researchers and their backers to commit to the common good principle: “Superintelligence should be developed only for the benefit of all humanity and in the service of widely shared ethical ideals.”
Humanity “as a whole” doesn’t exist. Neither do “widely shared ethical ideals.”
That means that the “decisions” AI machines “make” would almost surely reflect the materialist utilitarianism of their creators–making them potential authoritarian masters of those among us who believe differently.
Or what if our AI machine overlords became “fundamentalist” in the belief that enforced moral conformity would most benefit mankind by eliminating the violence and divisions often sparked by cultural diversity?
Either way, you are looking at a potential dictatorship of machines.
Utopianism never ends well. Think French Revolution. Think Russian Revolution. Think the jihad that we now confront.
Transhumanism’s version would be no different. Indeed, Bostrom’s supposed AI corrective reflects that precise dystopian potential.