Editor’s Note: The below is an expanded version of a review that appears in the current issue of National Review. The book is A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going, by Michael Wooldridge.
I was attracted by the word “brief” — a “brief history.” I was also attracted by the subject, artificial intelligence. It is an important subject, I knew, and one about which I was largely ignorant. I’m not really sure I cared about AI, frankly — but I knew I should.
A “myth-busting guide,” says the front flap of the book cover. To be frank again: I was unsure of the myths needing busting.
So, a book intended for a general audience will be reviewed by a member of that audience.
The author of our brief history is Michael Wooldridge, professor of computer science at Oxford University, and the head of the department. These are impressive credentials, would you not agree?
About five years ago, we had an intern at National Review who was double-majoring at the University of Chicago in English and math. I said, “Max, are the math majors at Chicago so bright, they can barely function in life?” He said, “Those kids tend to be in CS.” I said (so help me), “What is CS?”
Computer science, of course.
Professor Wooldridge can surely function in life, and he is a wonderful communicator, a wonderful teacher. He writes very well — not just for a scientist, but for anybody. “When it comes to speaking and writing,” said William F. Buckley Jr., the founder of this magazine, “there is something in the water over there,” meaning England. “We might as well admit it.”
Just as important, Wooldridge loves his subject, and communicates that love. There is something endearing about another person’s love, even if you don’t share it. “Artificial intelligence is my life,” writes the professor. “I fell in love with AI as a student in the mid-1980s, and I remain passionate about it today.”
He goes on to “count the ways,” as another Englishman, or Englishwoman, said. (Elizabeth Barrett Browning.) For one thing, writes Wooldridge, “AI appeals to fundamental questions about the human condition and our status as Homo sapiens — what it means to be human, and whether humans are unique.” That is a very big “thing.”
As he presents his subject, Wooldridge does his best to keep it simple. He writes, for example, about “what are called Winograd schemas.” Those words “what are called” are a little kindness. He also cites a dictum of Stephen Hawking, author of a “brief history” that sold more than 25 million copies: A Brief History of Time. His dictum? Every equation you use in a book cuts its readership in half.
In his own book, Michael Wooldridge has four goals: to say what artificial intelligence is, and isn’t; to tell the story of the field (that “brief history”); to say what AI can do right now, and what it might do in the future; and “to have some fun.” He succeeds on all fronts.
His book is part history, part philosophical tract, and part “explainer,” to use a word that has arisen in journalism.
To tell the story, he begins at the beginning, which could be ancient Greece, he says, or James Watt (1736–1819), or someone or something else. He decides to begin with Alan Turing (1912–54). I knew Turing as the mathematical genius who helped the Allies win World War II, cracking codes at Bletchley Park; also as a man persecuted by the state for his homosexuality, and hounded to his death over the same. There’s a lot more to know, as Wooldridge details.
“He was, for all practical purposes, the inventor of the computer, and shortly after that, he largely invented the field of AI.”
As I continued to read the history, I thought, “Ah, we’re meeting their Babe Ruths.” There’s John McCarthy, who was born in Boston in 1927 to an Irish-immigrant father and a Jewish mother who had emigrated from Lithuania. After spending a career at Stanford, McCarthy died in 2011.
“With almost casual brilliance,” writes Wooldridge, this man “invented a range of concepts in computing that are now taken so much for granted that it is hard to imagine that they actually had to be invented.”
The stories of science and technology have a certain romance, as do stories of sports, politics, and — well, romance. Think of that best-selling book of 1995, Longitude, by Dava Sobel. Wooldridge tells the story of MYCIN, an AI system of the early 1970s that “became iconic,” he says.
Iconic? To whom? To AI cognoscenti, certainly — and now to anyone who has read A Brief History of Artificial Intelligence. (MYCIN aided in the diagnosis of blood infections.)
AI has had its ups and downs, its periods of boom and bust, as Wooldridge explains. Sometimes it has been regarded as the province of charlatans and quacks. Sometimes it has been the hottest, most wow-making thing.
Early in his book, Wooldridge has an interesting chart, showing tasks that computers have been made to do, or might be made to do. In the category of “solved, after a lot of effort” is the playing of chess. In the category of “real progress” are driverless cars.
In 1996, Deep Blue, the IBM supercomputer, played Garry Kasparov, the world chess champion. Kasparov won, four games to two. The next year, an improved Deep Blue beat Kasparov, three and a half games to two and a half. This was a thunderous event.
At the time, I was working for The Weekly Standard, in Washington. Our cover read, “Be Afraid. Be Very Afraid.” The piece, on Deep Blue, was by Charles Krauthammer (an expert chess player himself). In the same issue, we had a piece on cloning, by James Q. Wilson. The impetus for the piece, or its “hook”?
Scientists at the Roslin Institute, near Edinburgh, had cloned a sheep, whose name was Dolly. So, headlines around the world read, “Hello, Dolly!” But JQW’s piece, in the Standard, was called “The Paradox of Cloning.” It took a fairly relaxed — though not lackadaisical — view of the matter.
Our editor, William Kristol, thought both pieces were wrong. He was happy to publish them, of course. (Krauthammer and Wilson were the cream of the crop.) But he suspected that Deep Blue was not much of a worry, while cloning was.
Different minds — equally good — will have different concerns about different developments.
Wooldridge tells us about the quarrels, and brawls, within AI. Indeed, one section is headed “The Great Schism.” (On one side, “mainstream AI”; on the other, “machine learning.”) There is also a passage that begins, “In 1991, a young colleague returning from a large AI conference in Australia told me, wide-eyed with excitement, about a shouting match that had developed . . .”
From what I can tell — and you know I’m a mere onlooker, with no expertise — Wooldridge is fair to all sides in the various debates, whatever his own views (and he goes ahead and gives those). In this book, he is a historian, as advertised, in addition to a scientist.
Do you know the expression “name withheld to protect the guilty”? That is what Wooldridge does, in at least one instance. Have a listen:
At the beginning of the 1990s, I met one of the chief protagonists of the behavioral AI revolution — then something of a hero of mine. I was curious about what he really thought about the AI technologies that he was loudly rejecting . . . Did he really believe these had no place at all in the future of AI? “Of course not,” he answered. “But I won’t make a name for myself agreeing with the status quo.”
In this book, we learn about “weak AI” and “strong AI.” “Weak AI” — I’ll use my own words — is practical, thinkable stuff. “Strong AI” — I’ll use the author’s words — is “the idea of machines that are, like us, conscious, self-aware, truly autonomous beings.” Holy smokes.
Strong AI is a long way off, if it is coming at all. You never hear about strong AI at AI conferences, says Wooldridge, “except possibly late at night, in the bar.”
Throughout the book, Wooldridge also refers to strong AI as “the grand dream.” It might strike some as a nightmare — but so it is with all advances, if advances they are.
Driverless cars are here, like it or not. They arrived one day in 2005. “On that day,” writes Wooldridge, “driverless cars became a solved problem, in the same way that heavier-than-air powered flight became a solved problem at Kitty Hawk.” We were not jetting all over the place the week after Orville Wright stayed aloft for twelve seconds. But the problem was done. Over.
To speak personally, I used to shudder at the thought of driverless cars. Or at least I think I did. I can’t quite remember. Regardless, I am fairly relaxed about driverless cars, particularly after reading Professor Wooldridge on the subject. Will there be problems? No doubt. Are there problems now, with us human drivers? Well, more than a million people a year are killed in car accidents; about 50 million are injured. In any event, the driverless world is upon us.
“I am pretty confident,” writes Wooldridge, “that my grandchildren will regard the idea that their grandfather actually drove a car on his own with a mixture of horror and amusement.”
We mentioned Stephen Hawking, the late and best-selling physicist, above. In 2014, says Wooldridge, Hawking — who at the time was the most famous scientist in the world — publicly stated that he had a fear about artificial intelligence: that AI, in fact, represented an existential threat to humanity.
Wooldridge himself is not blasé about the dangers and dilemmas. No, he is wide awake to them, and he deals with them in his book, one by one.
What about the future of work? Will AI make all but a handful of us “redundant,” and thus unemployed? The power loom put a lot of people out of work, as did the tractor, as has the microprocessor.
How about war? If drones, rather than soldiers and pilots, carry out war (as they are increasingly doing), and weapons are “autonomous,” will governments be more likely to wage war?
What about a Terminator scenario? Will robots, cyborgs, and other scientific products run amok, overmastering us? The Arnold Schwarzenegger movie, The Terminator, came out in 1984. Wooldridge cites it, because people are always citing it to him, expressing their concerns over AI. (Steven Spielberg had a movie called, simply, “A.I.,” in 2001.)
Professor Wooldridge deals with the issues with a combination of expertise and common sense.
Personally, I may be most concerned about “deep fakes” in the news — the perfection of the manufacture of news. Videos that look absolutely real, for example, when they are absolutely false. I’m talking about genuine fake news, if you will, rather than news we don’t like, which we call “fake.”
“Fake news on social media is just beginning,” says Wooldridge.
When he writes about “diversity” and such — sex and race and all that — you might find that the book gets a little mushy. Even the grammar gets weird, as when a single person is “they.” This does not sound like Professor Wooldridge, frankly. But — it’s very modern.
As I said early on, A Brief History of Artificial Intelligence is intended for a general audience, and anyone can read it. But it is not . . . how to put it? Beach reading. Most readers, I imagine, would have to put in some work, as I did. Relief rarely came to me.
One instance in which it did was on page 134. Wooldridge quotes the opening paragraph of À la recherche du temps perdu (Proust) — at last, familiar territory! He then shows how Google Translate handles it. Such programs are not yet perfect. (Neither are human translators.) But the progress made on them is astounding.
As he writes, you get the feeling that Professor Wooldridge is talking to us as he would to small children — keeping it as simple as possible. And yet, his material is not the stuff of kindergarten. If we overheard him talking to his colleagues, their conversation might strike us as Chinese. (Then again, those colleagues may well be Chinese.)
Some well-known scientists and entrepreneurs have compared AI to electricity, and even — let’s go way back — to fire. Wooldridge is modest about his field, or realistic. But he is not dismissive of the power, even magic, of AI, which he loves so much. The following words are typical of him:
AI has started to make its presence felt in every aspect of our lives. . . . While some applications of AI will be very visible in the future, others will not. AI systems will be embedded throughout our world, in the same way that computers are today. And in the same way that computers and the World Wide Web changed our world, so too will AI.
How? Michael Wooldridge offers some surmises. Mainly, however, he says: Stay tuned.