This is an excerpt from John Lennox's 2084: Artificial Intelligence and the Future of Humanity (Zondervan, 2020), published with permission:
In April 2018, at the TED conference in Vancouver, physicist and cosmologist Max Tegmark, president of the Future of Life Institute at MIT, made this rather grandiose statement: “In creating AI [artificial intelligence], we’re birthing a new form of life with unlimited potential for good or ill.”
A study by Sir Nigel Shadbolt and Roger Hampson entitled The Digital Ape carries the subtitle How to Live (in Peace) with Smart Machines. They are optimistic that humans will still be in charge, provided we approach the process sensibly. But is this optimism justified? The director of Cambridge University’s Centre for the Study of Existential Risk said: “We live in a world that could become fraught with . . . hazards from the misuse of AI and we need to take ownership of the problem, because the risks are real.”
The ethical questions are urgent, since AI is regarded by experts as a transformative technology in the same league as electricity. It would, however, make more sense to compare AI with nuclear energy than with electricity. Research into nuclear energy led to nuclear power stations, but it also led to a nuclear arms race that brought the world to the brink of extinction.
AI creates problems of similar, or of even greater, magnitude.
The brilliant play Copenhagen by Michael Frayn explores the question of whether scientists should simply follow the mathematics and physics without regard to the consequences of what they are developing or whether they should have moral qualms about it. The context of the play is the research that led to nuclear fission. Exactly the same issues are raised by AI, except that AI is accessible by many more people than atomic physics and does not need very sophisticated and expensive facilities.
You cannot build a nuclear bomb in your bedroom, but you can hack your way around the world and cause substantial damage. We need to stop and ask: What is the truth behind claims like those of Tegmark? Are they perhaps exaggerated speculation that goes far beyond what scientific research has actually shown? There may well be some validity in the observation that the amount of unjustified speculation about AI is in inverse proportion to the amount of actual hands-on AI work the claimant has done. It would seem that those scientists who actually build AI systems tend to be more cautious in their predictions about the potential of AI than those who do not.
There is also the question of what worldview is driving all of this. What are the assumptions that are being made? Are they in the interests of all of us or simply of an elite few who wish to dominate for their own purposes?