AI researcher and tech entrepreneur Erik J. Larson has just published a book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Harvard University Press, 2021). It debunks the “AI is taking over” claims from people as varied as futurist inventor Ray Kurzweil and the late Stephen Hawking — the media love that sort of thing. We are less likely to hear from well-qualified people who say it’s nonsense. But now is our chance.
For example, iconic Silicon Valley entrepreneur Peter Thiel (think PayPal) offers an endorsement: “If you want to know about AI, read this book. For several reasons ― most of all because it shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence.” Thiel’s own 2014 bestseller Zero to One reflected on the problem from an entrepreneur’s point of view.
Veteran science writer John Horgan offers, “Artificial intelligence has always inspired outlandish visions, but now Elon Musk and other authorities assure us that those sci-fi visions are about to become reality. Artificial intelligence is going to destroy us, save us, or at the very least radically transform us. In The Myth of Artificial Intelligence, Erik Larson exposes the vast gap between the actual science underlying AI and the dramatic claims being made for it. This is a timely, important, and even essential book.”
Unlike many science writers, Horgan is not afraid to confront what he sees as nonsense marketed as official science, hence his own controversial classic The End of Science (1996). Late last year, he chronicled, amusingly, in Scientific American how he became an “AI doubter” himself.
Information theorist William Dembski recalls moderating a panel discussion years ago that highlights the significance of Larson’s book. It does a much better job of exposing the nonsense than even very able predecessors have done:
Back in 1998, I moderated a discussion at which Ray Kurzweil gave listeners a preview of his then-forthcoming book The Age of Spiritual Machines (1999), in which he described how machines were poised to match and then exceed human cognition, a theme he doubled down on in subsequent books (such as The Singularity Is Near (2005) and How to Create a Mind (2012)). For Kurzweil, it is inevitable that machines will match and then exceed us: Moore’s Law guarantees that machines will attain the needed computational power to simulate our brains, after which the challenge will be for us to keep pace with machines. Kurzweil’s respondents at the discussion were John Searle, Thomas Ray, and Michael Denton, and they were all to varying degrees critical of his strong AI view. Searle recycled his Chinese Room thought experiment to argue that computers don’t/can’t actually understand anything. Denton made an interesting argument about the complexity and richness of individual neurons, how inadequate our understanding of them is, and how even more inadequate our ability is to realistically model them computationally. At the end of the discussion, however, Kurzweil’s overweening confidence in the glowing prospects for strong AI’s future was undiminished. And indeed, it remains undiminished to this day (I last saw Kurzweil at a Seattle tech conference in 2019 — age seemed to have mellowed his person but not his views).

William A. Dembski, “Unseating the Inevitability Narrative” at Amazon Customer Reviews
Were the predecessors too polite? Not thorough enough? Was it just not the right time? Whatever the reason, Dembski sees Larson’s book as “far and away the best refutation” of the AI overlords stuff we hear. And Dembski has followed the field for four decades:
In fact, I received an NSF graduate fellowship in the early 1980s to make a start at constructing an expert system for doing statistics… I witnessed in real time the shift from rule-based AI (common with expert systems) to the computational intelligence approach to AI (evolutionary computing, fuzzy sets, and neural nets) to what has now become big data and deep/machine learning. I saw the rule-based approach to AI peter out. I saw computational intelligence research, such as conducted by my colleague Robert J. Marks II, produce interesting solutions to well-defined problems, but without pretensions for creating artificial minds that would compete with human minds. And then I saw the machine learning approach take off, with its vast profits for big tech and the resulting hubris to think that technologies created to make money could also recreate the inventors of those technologies.

William A. Dembski, “Unseating the Inevitability Narrative” at Amazon Customer Reviews
Recreate the inventors? Yes, as the AI apocalypse is indefinitely delayed, many promoters put their faith in achieving immortality (sort of) by uploading their minds to computers. The rest of us should keep an eye on Larson’s book.
It’s worth asking whether the whole “AI’s gonna take over; nothing you can do” myth mainly benefits Big Tech companies at the expense of the rest of society. If such conglomerates can create a sense of inevitability around their actions, people may be less critical, concluding that there is nothing Big Tech can’t do and nothing we can do about it. Not so fast.
Note: Here’s a podcast with Larson.
Next: No AI Overlords: What is Larson arguing and why does it matter?
You may also wish to read: Why Richard Dawkins thinks AI may replace us He likes the idea because it is consistent with his naturalist philosophy. Dawkins does not advance an argument for why “anything that a human brain can do can be replicated in silicon,” apart from the fact that he is “committed to the view that there’s nothing in our brains that violates the laws of physics.”