Recently, diplomat Henry Kissinger teamed up with Eric Schmidt (former CEO of Alphabet) and MIT dean Daniel Huttenlocher at The Atlantic to tell us how we must change, so that we may be made worthy of the promises of AI:
Attempts to halt it would cede the future to that element of humanity more courageous in facing the implications of its own inventiveness. Instead, we should accept that AI is bound to become increasingly sophisticated and ubiquitous, and ask ourselves: How will its evolution affect human perception, cognition, and interaction? What will be its impact on our culture and, in the end, our history?
– Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, “The Metamorphosis” at The Atlantic
Walter Bradley Center fellows aren’t really in a position to respond to demands for metamorphosis (total transformation); they could and did, however, respond to specific claims the article makes about the champion chessbot AlphaZero:
Last December, the developers of AlphaZero published their explanation of the process by which the program mastered chess—a process, it turns out, that ignored human chess strategies developed over centuries and classic games from the past. Having been taught the rules of the game, AlphaZero trained itself entirely by self-play and, in less than 24 hours, became the best chess player in the world—better than grand masters and, until then, the most sophisticated chess-playing computer program in the world. It did so by playing like neither a grand master nor a preexisting program. It conceived and executed moves that both humans and human-trained machines found counterintuitive, if not simply wrong. The founder of the company that created AlphaZero called its performance “chess from another dimension” and proof that sophisticated AI “is no longer constrained by the limits of human knowledge.”
Now established chess experts are studying AlphaZero’s moves, hoping to incorporate its knowledge into their own play. These studies are practical, but larger philosophical questions also emerge. Among those that are currently unanswerable: How can we explain AlphaZero’s capacity to invent a new approach to chess on the basis of a very brief learning period? What was the reality it explored? Will AI lead to an as-yet-unimaginable expansion of familiar reality?
– Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, “The Metamorphosis” at The Atlantic
Is AlphaZero really our new master? Will it take over from all of us? Well, no. There is a fatal flaw in all these assumptions about its creative abilities, as computer engineer Eric Holloway explains:
The big problem with AI decision-making is that it requires the problem domain to be ergodic. This means that regardless of how far along the decision-maker is in the chain of decisions, it can still use the same rules to make good decisions.
However, as is obvious, the real world is not like that.
But, why is AI limited to ergodic problems? It is because AIs make decisions using decision trees based on an externally supplied scoring metric. They must follow the decisions all the way to the end to be sure which decisions maximize the score.
Decision trees grow exponentially with the number of decisions in a series. A series with 10 decisions and 2 choices at each decision will require an evaluation of 1024 nodes. If we double the decisions, then the AI will need to evaluate over a million nodes. And so on.
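Holloway’s arithmetic is easy to verify. A throwaway Python sketch (the function name is my own, not anything from the article) makes the exponential blow-up concrete:

```python
# Leaf nodes an exhaustive search must evaluate for a sequence of
# `depth` decisions with `branching` choices at each decision.
def nodes_to_evaluate(depth: int, branching: int = 2) -> int:
    return branching ** depth

print(nodes_to_evaluate(10))  # 1024 -- ten binary decisions
print(nodes_to_evaluate(20))  # 1048576 -- double the decisions, over a million nodes
```

Note that doubling the number of decisions squares the node count, which is why the growth outruns any fixed hardware budget.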
AI needs ergodic problems because only in such domains can the decision sequences stay very short and still apply everywhere.
This is also why peak AI is inevitable. If we double our computing power, we only add a single decision to the sequence. So, even if computation could increase exponentially forever, we will only be incrementally increasing the AI’s decision capability each time. And while going from 10 to 20 decision steps is significant, it will require at least the same amount of work to go from 1,000,000 to 1,000,010 decision steps, which is insignificant. So, if we want to avoid peak AI we need better than an exponential growth rate in computing.
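Put another way, the depth an exhaustive evaluator can reach grows only as the logarithm of its compute budget. A minimal sketch of that claim, assuming binary choices and exhaustive search (both of Holloway’s premises):

```python
import math

# With 2 choices per decision, searching `depth` steps exhaustively costs
# 2**depth evaluations, so a budget of B evaluations reaches a depth of
# about log2(B) decisions.
def reachable_depth(budget: int) -> int:
    return int(math.log2(budget))

print(reachable_depth(2 ** 10))  # 10 decisions
print(reachable_depth(2 ** 11))  # 11 -- doubling compute adds one decision
print(reachable_depth(2 ** 40))  # 40 -- a trillion evaluations, still only 40 steps
```

The logarithm is what makes “peak AI” plausible on Holloway’s premises: exponential hardware gains translate into merely linear gains in decision depth.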
Even so, we should stop and admire the sense of destiny: Kissinger et al. write, “But the phenomenon of a machine that assists—or possibly surpasses—humans in mental labor and helps to both predict and shape outcomes is unique in human history.”
Holloway points out, “We’ve been doing this since we were using piles of rocks to surpass our ability to hold and manipulate numbers in our heads.”
The new hotness should be newer than that.
Computer analyst Jonathan Bartlett lets us in on what really gives powerful computer programs their amazing strength:
The explanation for AlphaZero’s chess success is simple: Chess has a large search space. It is largely unexplored. With a sufficiently large number of searches, it is possible to find new, unexplored areas to search. Simply memorizing the statistically most likely place to move from a bazillion played moves is hardly intellectually exciting.
It reminds me of this SMBC comic, which demonstrates the pitfalls of statistical thinking.
Human play is usually both (a) effective and (b) understandable. It is possible that the computer found a mode of play that is effective but not understandable. Or, perhaps it is just a new, unexplored area of chess.
The nature of AlphaZero is itself limiting. AlphaZero only works because it can fully grasp the entire game and all its possibilities within preset rules. This is the foundation for everything. If AlphaZero can’t simulate the game 100%, then it is worthless.
The authors also assume that opaque models of reality are always good. Not so. Oftentimes opaque models wind up modeling noise and not data.
All AI requires testing. In AlphaZero’s case, the AI advanced quickly because it could test itself. In other systems, AIs advance because humans can test the outcomes themselves.
However, the article does develop a very important point towards the end. The problem with AI is that it can seem like a human companion. The authors write, “As a result, AI could induce humans to feel toward it emotions it is incapable of reciprocating.” That is certainly a concern. One of their big worries, which I share, is this:
…it is possible that in many parts of the world, from early childhood onward the primary sources of interaction and knowledge will be not parents, family members, friends, or teachers, but rather digital companions, whose constantly available interaction will yield both a learning bonanza and a privacy challenge. AI algorithms will help open new frontiers of knowledge, while at the same time narrowing information choices and enhancing the capacity to suppress new or challenging ideas.
– Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher, “The Metamorphosis” at The Atlantic
This is why I still publish in print books and teach from a whiteboard. While asking Wikipedia is easy and convenient, the danger of relying too much on automated forms of information is real.
Responding to the sense of doom and decision that the Atlantic piece exudes, Eric Holloway also offers, “If I were a supervillain, I’d create an ‘AI’ that I made everyone believe had some sort of superhuman intelligence, so that everyone mindlessly accepted its pronouncements regardless of how ridiculous they sounded: ‘It can beat the best humans in board games and video games, so it must be right.’ Meanwhile, behind the scenes, I’d be feeding the AI the answers I wanted.”
Like many public figures, Kissinger worries publicly about an AI apocalypse. But AI is not a person with evil intentions; it is only a set of machines that people use. The plot is still in search of Holloway’s supervillain to bring about the AI Doomsday.
Other AI apocalypses are available. Choose one that suits you better if you wish:
Our AI overlords will save Earth, says prominent scientist. AlphaGo, the Go-playing computer program, is the start of telepathic superintelligences that will tackle climate change.
Tales of an invented god: The most important characteristic of an AI cult is that its gods (Godbots?) will be created by the AI developers and not the other way around.