
Why I Doubt That AI Can Match the Human Mind

Computers are exclusively theorem generators, while humans appear to be axiom generators

A reader wrote recently to ask why, in the midst of numerous recent artificial intelligence (AI) advances, we at Mind Matters remain skeptical of the ability of AI to match human cognitive abilities. My response requires a bit more technical background than I usually need to provide, but sometimes that’s unavoidable. (If you want to go directly to a less technical version, here’s a talk I gave recently on a similar subject.)

First, though, cognitive ability is only one aspect of intelligence. Those who think that artificial intelligence will eventually equal human intelligence face many hurdles, including problems of consciousness, emotion, etc. Here, we are looking at only one problem—cognitive ability.

Consider the difference between axioms and theorems. An axiom is a foundational truth that cannot be proven within the system in which it operates. A theorem is a derivative truth whose truth value we can establish from the axioms. Computers are exclusively theorem generators, while humans appear to be axiom generators.

Computers are much better than humans at processing theorems—by several orders of magnitude. However, they are limited by the fact that they cannot establish axioms. They are entirely boxed into their own axiomatic rules.
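To make the distinction concrete, here is a minimal sketch (mine, in Python, with made-up facts and rules, not anything from the article’s sources) of a computer acting as a theorem generator. The human supplies the axioms and the inference rules; the program does nothing but mechanically derive whatever follows from them:

    # Axioms (facts) and inference rules are supplied by the human.
    facts = {"man(Socrates)"}                          # an axiom we hand the machine
    rules = [({"man(Socrates)"}, "mortal(Socrates)")]  # rule: premises -> conclusion

    def forward_chain(facts, rules):
        """Derive every statement reachable from the supplied axioms."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # {'man(Socrates)', 'mortal(Socrates)'} -- new theorems, but no new axioms

However fast this loop runs, it only ever closes over what it was given; it has no operation for proposing a new axiom.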

You can see this in several aspects of computer science. The Halting Problem is probably the best known. In essence, you cannot create a computer program that can tell whether another, arbitrarily chosen program will ever finish. In fact, the problem is deeper than that: while the Halting Problem itself comes with a handy proof (which is why it is so often cited), we also find that, absent outside information, computers have trouble telling whether practically any program with loops will complete without simply running it to completion. That is, I can program a computer to recognize certain traits of halters and/or non-halters, but without that programming it cannot tell the difference. I have to add axioms to the program in order for it to process the information.
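The standard argument behind the Halting Problem is short enough to sketch in hypothetical Python (the names are mine, for illustration only). Suppose someone claimed to have written a universal checker halts(program, argument); the second function below would then halt exactly when it does not, so no such checker can exist. By contrast, the last function shows the kind of human-supplied “trait” a computer can be programmed to check for:

    def halts(program, argument):
        """Hypothetical universal halting checker -- no correct version can exist."""
        ...

    def paradox(program):
        # If a correct halts() existed: run forever when 'program' halts on itself,
        # and halt immediately otherwise.
        if halts(program, program):
            while True:
                pass

    # paradox(paradox) would halt if and only if it does not halt -- a contradiction,
    # so a correct halts() is impossible.

    def obvious_nonhalter(source_code):
        # A specific trait a human has encoded: a bare infinite loop with no break.
        return "while True:" in source_code and "break" not in source_code

The checker in the last function works only because a human recognized the trait and wrote it in; the program did not discover it.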

Given a set of axioms, computers can produce theorems very swiftly. But no increase in speed allows them to jump the theorem/axiom gap. AI research identifies the axioms needed to solve certain types of problems and then lets the computer loose to calculate theorems that depend on them.

AI research also creates more and more powerful axioms. That is, a previous generation may have started with axioms A, B, and C, but current generations have found more fundamental axioms, D, E, and F, which reduce A, B, and C to theorems.

A question now arises: Is there a super-axiom that allows all of these axioms to be reduced to theorems? The answer is no. The same logic that shows that the Halting Problem can’t be solved can be used to show why the super-axiom does not exist. This distinction is essentially the same as the one between first-order and second-order logic.

Computers cannot process second-order logical statements in the same way as first-order logical statements. Some systems are described as second-order logic processors, but the way they work is by picking out a subset of second-order propositions, reducing them to first-order propositions, and then processing them as first-order logic. This is essentially the same process I mentioned with respect to the Halting Problem: humans can identify specific traits of programs that will or will not halt and have the computer check for those traits, but the computer itself cannot generate those traits on its own.
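A standard illustration of this reduction is mathematical induction (the formulas below are textbook material, not from the article). The genuinely second-order axiom quantifies over all properties P of numbers; a first-order system such as Peano arithmetic replaces it with an axiom schema, one instance for each formula a human has already written down:

    % Second-order induction: a single axiom quantifying over all properties P
    \forall P\,\bigl[\bigl(P(0) \land \forall n\,(P(n) \rightarrow P(n+1))\bigr) \rightarrow \forall n\, P(n)\bigr]

    % First-order approximation: one axiom per formula \varphi, supplied in advance
    \bigl(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr) \rightarrow \forall n\, \varphi(n)

The schema covers only the properties that can be named by formulas already in the system, which is exactly the “subset” being picked out.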

In fact, if I were to hazard a guess, I would say that the point where computers break down is infinity. The Halting Problem deals with identifying programs that will run through an infinite number of states, and second-order logic deals with propositions that require an infinite number of comparisons. As I mentioned, once humans discover truths about these, we can encode those specific truths into the system as new axioms. But computers cannot discover the truths by themselves. For instance, try to imagine how a computer program (AI or otherwise) could establish the well-ordering property of the natural numbers without using any other second-order logic operation (or try to do it yourself!).
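For reference, the well-ordering property can be written out as follows (standard notation, not from the article). The quantifier ranging over every subset S of the natural numbers is what makes it irreducibly second-order; a first-order system can only approximate it one definable subset at a time:

    \forall S \subseteq \mathbb{N}\;\bigl[\, S \neq \varnothing \rightarrow \exists m \in S\;\forall n \in S\;(m \le n) \,\bigr]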


Additionally, all of the axioms considered so far are dependent axioms. That is, given a system, these axioms are implied by the system even though they aren’t deducible from it. However, there are other types of axioms, which are independent and set the ground rules of the system to begin with. These are even more general axioms that go outside the system altogether. An example of one system that steps outside another is non-Euclidean geometry. Non-Euclidean geometries operate by swapping out some of the fundamental Euclidean axioms for other axioms; hyperbolic geometry, for example, replaces Euclid’s parallel postulate with the axiom that, through a point not on a given line, more than one line can be drawn that never meets it.

So, in all of these cases, we can see that humans are supplying axioms and computers are processing the axioms into theorems. The computer is never the supplier of the axiom.

Now, here’s a criticism someone might offer: Perhaps humans are much more limited than we realize. Perhaps we have a fixed set of axioms and are merely increasing our ability to express them, so that at some point we will exhaust the number of axioms we can actually process. If that happened, it is very possible that computers would achieve parity with humans in this realm. But it would also mean the end of science and mathematics.

However, I think that the history of science and mathematics suggests that humans will continue to generate axioms over time. The discoveries of mathematics throughout history are real discoveries of new axioms. A human grasp of infinity enables us to pull in axioms when they are needed, as I discuss in my paper, “Using Turing Oracles in Cognitive Models of Problem-Solving” (open access).

In summary, my primary reason for doubting that AI can match human intelligence is that the difference between mind and machine is a difference of kind, not of quantity. Understanding the distinction will help us exploit the abilities of each to their maximum potential. There are other reasons for doubting the future equivalence of AI and human intelligence, but this is the one I would consider first.

Jonathan Bartlett

Jonathan Bartlett is the Research and Education Director of the Blyth Institute.

Note: Many consider the theory of artificial intelligence a foregone conclusion due to materialism, with it just being up to the computer scientists to figure out the details. But what if materialism is not the only game in town? Discover the exciting new scientific frontier of methodological holism in the new journal Communications of the Blyth Institute.

Also by Jonathan Bartlett: Google Search: Its Secret of Success Revealed

and

Did AI show that we are “a peaceful species” triggered by religion?

Also: Human intelligence as a Halting Oracle (Eric Holloway)

