Why might genuine artificial intelligence be possible? Some proponents of strong AI, as it is called, offer philosophical arguments. For example: only the material world exists, and therefore human consciousness must be made of matter. Because, as far as we know, all material phenomena can be modeled by a computer program, in theory we should be able to reproduce the human mind with a computer program. Others believe that, even if the mind is immaterial, the only thing we can investigate scientifically is the physical world; therefore, the only valid scientific perspective on the subject is that the mind is composed of matter. And some simply dislike the religious implications of an immaterial mind, and refuse to consider any possibility that might open the door to religion.
The weakness of all of these arguments is that they depend on an ideological commitment to explicit, unproven theories about the universe. What if, for example, the material world isn’t the only one? What if science can study some immaterial phenomena? What if we decide to ignore issues around religion while investigating the question? These arguments in favor of strong AI then cease to have much weight.
There is, however, one argument for strong AI that does not depend on any sort of ideological commitment. It depends only on things we know with certainty or near certainty:
As far as we know, the physical world is finite and discrete. Everything that is finite and discrete can be described by a finite string of symbols. Every finite string of symbols can, in turn, be generated by a computer program. In the most trivial case, the computer program prints out the string of symbols verbatim, and from there more sophisticated generating programs can be developed.
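The trivial case described above can be sketched in a few lines of Python. This is purely illustrative; the string and the generating rule are made-up examples, not anything from the argument itself.

```python
# A finite description of a discrete system, written as a symbol string.
description = "01" * 8  # "0101010101010101"

def verbatim():
    # Trivial generator: store the string and emit it as-is.
    return "0101010101010101"

def generated():
    # A more sophisticated generator derives the same string from a rule.
    return "01" * 8

# Both programs produce the same finite symbol string.
assert verbatim() == generated() == description
```

The point is only that for any finite string, at least one generating program exists: the one that embeds the string verbatim.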
Now, leaving aside the question of whether the mind and consciousness are material or immaterial, it is clear that whatever the case may be, all the observable aspects of the mind intersect with the physical world and create a physical effect. Thus, an artificial intelligence computer program that reproduces the physical phenomena of the human mind is possible. We will call this kind of artificial intelligence “algorithmic intelligence.” In fact, based on the preceding premises, algorithmic intelligence as just defined is necessarily true.
Is algorithmic intelligence practically possible?
If we have established that algorithmic intelligence is necessarily true, what sense does it make to argue that it is impossible? To answer this question, let’s consider the trivial form of algorithmic intelligence that prints out a finite string of symbols or generates a physical description verbatim, regurgitates it, if you will. We will call this program a “regurgence” for short. Is regurgence an acceptable example of algorithmic intelligence? Only if we are satisfied with calling a video recording or a book an “algorithmic intelligence.” Clearly, regurgence is missing something that we associate with intelligence.
Regurgence is a bloated and static kind of algorithmic intelligence. We expect intelligence to be something much more elegant, something responsive and creative. In more technical terms, one signal of intelligence is that it compresses: the program should be shorter than the symbol string it generates, a compression of the detectable physical effects. Practically speaking, regurgence requires a great deal of storage space, and capturing all human actions over a lifetime may well exceed available storage capacity. So, while still theoretically possible, regurgence may be practically impossible as a form of intelligence.
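The contrast between regurgence and compression can be sketched concretely. In this minimal Python example (the string and rule are illustrative assumptions, not anything from the text), the generating rule is vastly shorter than the output it produces:

```python
# "Regurgence": store all 100,000 symbols verbatim.
raw = "ab" * 50_000

# Compression: a short rule (13 characters) that generates
# the same 100,000 symbols when evaluated.
rule = '"ab" * 50_000'

# The rule reproduces the raw string exactly, yet is far shorter.
assert eval(rule) == raw
assert len(rule) < len(raw)
```

A program shorter than its output is, in this sense, a compressed representation of that output.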
Why do we assume that algorithmic intelligence is theoretically possible?
Because the sort of intelligence we are interested in seems more like compression than regurgence, algorithmic intelligence is no longer a necessary truth. It is not necessarily true that all symbol strings have a shorter compressed representation. This is easy to see if we imagine the opposite: if all symbol strings had a shorter representation, then so would their shorter representations, and so on. Thus, we’d end up concluding that every symbol string can be represented by nothing, which is incoherent. Therefore, we conclude that only some symbol strings have a compressed representation. As a consequence, compression intelligence is possible only if the physical effects of the human mind are compressible.
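The claim that not every string compresses also follows from a simple counting argument, which can be checked numerically. This sketch (with an arbitrary length of 10 as an assumption) counts binary strings against all strictly shorter descriptions:

```python
n = 10
total_strings = 2 ** n                         # 1024 binary strings of length 10
shorter_descs = sum(2 ** k for k in range(n))  # 1 + 2 + ... + 512 = 1023

# Pigeonhole principle: 1023 shorter descriptions cannot cover
# 1024 strings, so at least one length-10 string has no
# representation shorter than itself.
assert shorter_descs < total_strings
```

The same inequality holds for every length n, since 2^0 + ... + 2^(n-1) = 2^n - 1.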
Of course, it seems fairly obvious that we can compress the physical effects of the human mind. Again, a videotape illustrates that we can compress human action, albeit in a lossy format. So, even though algorithmic intelligence is no longer a necessary truth, it still seems quite plausible. In fact, as long as the possibility of finding a compressed, computable representation of human action remains open, we should take it as the best hypothesis we have and go from there. It is impossible to say definitively that there is no compressed form without investigating all possibilities. And because it is not possible to test every single algorithm that might reproduce human action, we must always assume there is an as-yet-unknown algorithm that fills the gap between what we observe and what we can compute. In other words, we must always assume there is an “algorithm of the gaps,” because the only alternative is to give up. And we never make scientific progress by giving up.
Can we prove that algorithmic intelligence is impossible?
Now, let’s take a step back. There is another way to prove a negative besides exhaustively enumerating the possibilities. Consider the equation 3 + x = 1. Imagine that someone claims there is a positive integer that can replace x; we’ll call this person a positive realist. Mathematicians have searched for years for a positive number that can replace x but have never found one. Nevertheless, the positive realist claims that his theory has not been falsified and therefore cannot be ruled out. In fact, he goes on to say, all efforts must be dedicated to substantiating positive realism, because negative numbers cannot physically exist. They cannot be investigated scientifically; any attempt to do so might open the door to non-physical existence.
Then a nonconformist mathematician subtracts 3 from both sides of the equation and finds x = 1 − 3 = −2, proving the positive realist wrong. This short example shows that, besides exhaustive enumeration, we can also use direct proof from the known laws of the subject matter to prove a negative.
Can we do the same with the question of human and algorithmic intelligence? Is there a way to demonstrate the impossibility of algorithmic intelligence without exhaustively enumerating all possible algorithms? Can we instead rely on first principles?
We know that there are limiting laws of computation. One of the most fundamental is the halting problem, discovered by Alan Turing.1 The halting problem states that no program can determine, in general, whether an arbitrary program halts. If the human mind surpasses the limit created by the halting problem, it is a “halting oracle,” and then, by definition, the human mind is not computable.
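Turing’s argument can be sketched in Python. The sketch below is illustrative: given any candidate halting-decider, we can construct a program that does the opposite of whatever the decider predicts about it, so no decider can be correct on every program.

```python
def make_paradox(halts):
    """Given any candidate halting-decider, build a program it misjudges."""
    def paradox():
        if halts(paradox):
            while True:   # decider says "halts" -> loop forever instead
                pass
        # decider says "loops forever" -> halt immediately
    return paradox

# A decider that answers "loops forever" for every program is refuted
# by its own paradox program, which promptly halts:
p = make_paradox(lambda prog: False)
p()  # returns immediately, contradicting the decider's verdict
```

Whatever the decider answers about its paradox program, the program does the reverse, so a fully general `halts` cannot exist.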
Three counter-arguments to this line of thought are the following:
- It is impossible for anything to be a halting oracle.
- A halting oracle could exist but the human mind cannot be one because a halting oracle can solve many problems that humans cannot solve.
- Even if the human mind could be a halting oracle, it is impossible to detect whether it is or not.
Here are some responses:
- It is impossible for anything to be a halting oracle. A halting oracle is logically possible as an infinite lookup table that lists all finite programs and their halting status.2
- A halting oracle could exist but the human mind cannot be one because a halting oracle can solve many problems that humans cannot solve. We can subtract a finite, and even an infinite, number of entries from the lookup table, but it remains uncomputable.
- Even if the human mind could be a halting oracle, it is impossible to detect whether it is or not. The human mind is much more likely to be a halting oracle than an algorithmic intelligence.3 With enough tests, we can reduce the probability otherwise to an arbitrarily small number. Likewise, because all programs require a certain amount of storage space, if we could demonstrate that the potential range of actions of a single mind requires a program that exceeds the storage capacity of the universe,4 then we know that the human mind cannot be a physical program.
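The lookup-table picture from the first response can be made concrete. In this minimal Python sketch (the table entries are illustrative assumptions), any finite prefix of the oracle’s table is an ordinary, trivially computable dictionary; it is only the complete, infinite table that no program can generate:

```python
# A finite prefix of the oracle's infinite program -> halting-status table.
finite_prefix = {
    "while True: pass": False,  # loops forever
    "pass": True,               # halts
    "x = 1 + 1": True,          # halts
}

def halting_status(source):
    # Trivial lookup; any finite slice of the oracle is computable.
    return finite_prefix[source]
```

Removing entries, even infinitely many, still leaves an infinite uncomputable remainder, which is the substance of the second response.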
Is there evidence from experience that humans are halting oracles? One piece of evidence accessible to software developers is the act of programming itself. In order to create high-quality software, programmers must reliably select programs whose halting status is known. Additionally, no one has yet figured out how to completely automate the programming task, although there have been numerous attempts. These observations are easy to make sense of if the human mind is a halting oracle, but much more difficult to explain if the mind is computational.5
On the theoretical side, another piece of evidence is Leonid Levin’s law of independence conservation.6 The law states that no combination of random and computational processing is expected to increase the mutual algorithmic information between two objects. In less technical terms, Levin’s law implies that no program can generate information that was not already written into the program. For example, no program can discover new mathematical truths beyond the limits of its code. More practically, no program can apply mathematics to invent new mechanical devices that are not implicit in its code. Yet the history of human progress shows many mathematical, scientific, and mechanical innovations that are hard to explain as implicit in the human brain or the environment. This creativity indicates that the human mind can access a capability that surpasses randomness and computation.
So, circling back to the original question of whether strong AI is possible, what have we learned?
If we define artificial intelligence as a very trivial form of algorithmic intelligence, which we have called regurgence, then it is necessarily true as a theoretical construct, although it may be practically impossible.
On the other hand, if we rely on a compression interpretation of intelligence, then it is no longer necessarily true. It may still not be practically possible, although it may seem the best hypothesis.
Then we examined whether the idea is falsifiable, and it turns out that algorithmic intelligence can be falsified via the limitations of algorithms, such as the halting problem.
In conclusion, if the human mind surpasses the limitations of algorithms, then the mind cannot be an algorithm, and artificial intelligence is impossible. Two pieces of evidence offered in this regard are the difficulties of software development and the history of human innovation. Not only is it valid to ask whether artificial intelligence is impossible, but the question can be pursued on a scientific basis with quantifiable, empirical evidence.
1 Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London mathematical society, 2(1), 230-265.
2 Eric Holloway, The Logical Possibility of Halting Oracles, Communications of the Blyth Institute, Vol 1 No 1 (2019).
3 Creativity and Machines, Communications of the Blyth Institute, Vol 1 No 1 (2019).
4 Aaronson, S. (2013). Why philosophers should care about computational complexity. Computability: Turing, Gödel, Church, and Beyond, 261-328.
5 Bartlett, J. (2014). Using turing oracles in cognitive models of problem-solving. Engineering and the ultimate, 99-122.
6 Levin, L. A. (1984). Randomness conservation inequalities; information and independence in mathematical theories. Information and Control, 61(1), 15-37.
Eric Holloway has a Ph.D. in Electrical & Computer Engineering from Baylor University. He is a current Captain in the United States Air Force, where he served in the US and Afghanistan. He is the co-editor of the book Naturalism and Its Alternatives in Scientific Methodologies. Dr. Holloway is an Associate Fellow of the Walter Bradley Center for Natural and Artificial Intelligence.
Note: Many consider the theory of artificial intelligence a foregone conclusion due to materialism, and it is just up to the computer scientists to figure out the details. But, what if materialism is not the only game in town? Discover the exciting new scientific frontier of methodological holism in the new journal Communications of the Blyth Institute.