A number of thinkers claim that there is an intrinsic difference between the human mind and a machine. John Searle developed a famous argument known as the Chinese Room, which argues that understanding a language is different from looking up constituent characters or words in a reference table. Then there is Thomas Nagel’s famous essay, “What Is It Like to Be a Bat?”, which argues that there is something irreducible about conscious experience, often called qualia, that cannot be captured mechanically. Hubert Dreyfus argues that machines cannot replicate human thought because ideas are analog rather than digital and are embodied within a broader context.
These arguments point to an inherent difference between the human mind and machines, a qualitative line. But where precisely this line lies is not pinned down. As a result, while the arguments are certainly appealing on an intuitive level and can be persuasive, they lack scientific grounding. They do not result in a hypothesis that can be examined and tested.
Is there a scientific line between mind and machine? That is, can we measure the difference between what minds and machines can do? Near the end of his life, computer pioneer John von Neumann (1903–1957) began to evaluate the differences. At that point in computer technology, there was a significant difference in processing efficiency.1
As AI increasingly takes on human tasks, we can update von Neumann’s project. All of the tasks that AI accomplishes require a certain amount of memory, computational power, and time. We have a good enough understanding of the human brain to measure the same quantities used for the same tasks. Thus, we can measure the difference between what minds and machines require to solve the same problem.
For example, using a rough estimate for processing, let’s say the DeepMind AlphaGo Zero AI takes 16 quintillion CPU cycles of training (a quintillion is a thousand raised to the sixth power, or 10^18) to exceed a human level of play in Go. On the other hand, let’s say a conscious human being can execute the equivalent of 50 bits per second and concentrates on Go and related skills for an entire lifetime. This effort requires the equivalent of about 120 billion CPU cycles, far less than the AI requirement. Thus, AlphaGo Zero would need to become about 100 million times more efficient in its use of CPU cycles to match the human’s resource budget on an equivalent task.2
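The arithmetic behind these rough figures can be sketched as follows. The lifetime length and the 50-bits-per-second processing rate are assumptions taken from the article's estimates, not measurements:

```python
# Back-of-envelope comparison using the article's rough figures.
# All inputs are assumptions/estimates.

ai_training_cycles = 16e18        # ~16 quintillion CPU cycles for AlphaGo Zero training
human_rate_bits_per_sec = 50      # estimated conscious human processing rate
lifetime_years = 76               # assumed lifetime spent on Go and related skills
seconds_per_year = 365.25 * 24 * 3600

# Total human "operations" over a lifetime of practice
human_lifetime_ops = human_rate_bits_per_sec * lifetime_years * seconds_per_year

# How many times more efficient the AI would need to be to match the human budget
efficiency_gap = ai_training_cycles / human_lifetime_ops

print(f"Human lifetime ops: {human_lifetime_ops:.2e}")  # ~1.2e11, i.e. ~120 billion
print(f"Efficiency gap:     {efficiency_gap:.1e}")      # ~1.3e8, i.e. ~100 million-fold
```

With these inputs the gap comes out near the 100-million-fold figure quoted above; changing the assumed lifetime or bit rate shifts the number but not its order of magnitude.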
This is just the training part. The processing requirements during gameplay can also be compared. AlphaGo Zero uses four TPUs to make its decisions, amounting to roughly 10 trillion operations per second, compared to the human’s estimated 50 operations per second. So there is a large disparity between human and AI efficiency in this case as well.
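The same kind of back-of-envelope division applies to the gameplay comparison; both rates below are the article's estimates, not measured values:

```python
# Gameplay-time comparison using the article's rough figures.

ai_play_ops_per_sec = 10e12     # estimated throughput of four TPUs (~10 trillion ops/s)
human_play_ops_per_sec = 50     # estimated conscious human processing rate

# How many times more hardware throughput the AI uses during play
play_gap = ai_play_ops_per_sec / human_play_ops_per_sec

print(f"Gameplay efficiency gap: {play_gap:.0e}")  # 2e11, i.e. ~200 billion-fold
```

On these estimates, the AI consumes on the order of 200 billion times more operations per second during play than the human, an even larger disparity than in training.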
From this off-the-cuff analysis, we see there is a huge performance gap between AI and the human mind, even though the AI may outperform a human on the task. Thus, while it is correct to say AI can outperform humans when we are measuring only task accomplishment, it is comparing apples and oranges to say that AlphaGo Zero is outperforming the human mind. From this cursory analysis, there appears to be a stark quantitative line between the performance of minds and machines.
Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is a Captain in the United States Air Force, where he has served in the US and in Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.