I don’t understand the appeal of artificial intelligence, the notion that we could construct and program a computer to become an independent “thinker” by making it self-programming. One doesn’t have to worry about an apocalypse out of the Terminator movies to recognize the peril.
But my real concern with the AI agenda involves the anti-humanism that underlies so much of the philosophical thinking within the field. When transhumanists insist that AI machines should be considered “persons,” entitled to “rights,” they both undermine the vitality of what that term represents and diminish the uniqueness of man. When they claim that the human brain is just so much computer programming–supposedly with no “programmer”–they reduce us to mere function and undercut human exceptionalism.
The Wall Street Journal’s Matt Ridley, borrowing a riff from transhumanist advocate Ray Kurzweil’s new book on AI, appears to go there. From his “Why You Should Bet Big on Bionic Brains”:
For a start, the brain is built from a relatively small and simple body of information—the 25 million bytes of the genome. The complexity comes from ordered growth and elaboration. Second, the brain contains massive redundancy, with certain kinds of basic pattern-recognizing circuits repeated maybe 300 million times in different brain regions. Third, as Van Wedeen of Harvard Medical School and colleagues found in a recent study, much of the brain has a horizontal grid of fibers running at right angles, connecting vertically: a bit like the streets and elevators of Manhattan.
Moreover, the design of artificial intelligence systems has been converging with the way brains developed. Using evolutionary algorithms (a fancy form of trial and error), Mr. Kurzweil himself developed some of the successful speech-recognition software that we all take for granted.
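To make the parenthetical concrete: an evolutionary algorithm really is a formalized version of trial and error. The sketch below is purely illustrative and my own, not Kurzweil’s speech-recognition system; it evolves a list of numbers toward a target by randomly mutating a candidate and keeping only the mutations that improve its fitness.

```python
import random

# A target "solution" the algorithm knows nothing about except via fitness.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def fitness(candidate):
    # Lower is better: total distance from the target.
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Trial: randomly nudge one element up or down by 1.
    child = candidate[:]
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

def evolve(generations=10000):
    best = [0] * len(TARGET)
    for _ in range(generations):
        child = mutate(best)
        if fitness(child) < fitness(best):  # error check: keep only improvements
            best = child
    return best

print(evolve())
```

Nothing here is “intelligent” in any meaningful sense: the program stumbles toward the target through blind variation and selection, which is the point of calling the method a fancy form of trial and error.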
We don’t know that this is how the brain developed. More importantly, we certainly don’t understand the mind–not the same thing as the brain–nor do we have the capacity to determine scientifically whether our deepest selves transcend the strictly corporeal. Moreover, even if an AI computer seized control of its own programming and developed ever greater data-processing capacities, that wouldn’t mean it was truly sentient, just very sophisticated.
But apparently to Kurzweil–who also thinks we will become immortal after technology tips into “the Singularity”–we are just a matter of what programmers call garbage in, garbage out:
Mr. Kurzweil agrees with another innovator turned neuroscientist, Jeff Hawkins (the PalmPilot’s inventor), in believing that the human brain is basically a set of prediction machines that work by forecasting how a pattern of perceptions will develop. As we put together the pieces of, say, a visual image, information is flowing up (by the neural grid’s elevators) from basic pattern recognizers to higher and more abstract integrations, but also back down from the higher levels predicting what patterns will be found in missing parts of the image or as an image changes. Failed predictions–“surprises”–may be passed (via the neural grid’s streets) to higher levels in the neural hierarchy for conscious resolution. If this picture is broadly right, then replicating a brain isn’t impossible.
But we are not just gray matter in a skull processing data. We go deeper: We have free will. We experience profound emotions. We are moral agents. We sense the transcendent, or at least think we do. We are often irrational. Moreover, we–unlike the most sophisticated computer–are alive, that is, living integrated organisms. I think that matters morally.
So, by all means bet on the development of astonishingly complex computers. But let’s drop the meme pushed by transhumanists (not Ridley in this piece) that AI machines would be people too. The silicon “brains” would not possess intrinsic value or dignity–even if partially constructed of cellular material–any more than does the laptop on which I am writing this post.