
The Flawed Logic behind “Thinking” Computers, Part II

There is another way to prove a negative besides exhaustively enumerating the possibilities

I am publishing, in three parts and with his permission, an exchange with Querius, who is looking for answers as to whether computers can someday think like people. In the first part, we discussed why human thinking cannot be indefinitely compressed. Here is the second part:

Recapping for myself what I said in Part I and mulling it over: “If all symbol strings do have a shorter representation, then so must their shorter representations. Thus, we’d end up concluding that all symbol strings can be represented by nothing, which is incoherent.”
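A quick way to see the counting behind that reductio is the pigeonhole principle: there are more strings of length n than there are strings shorter than n. A minimal sketch in Python (the choice of n = 8 is arbitrary):

```python
# Pigeonhole count: there are 2**n binary strings of length n, but only
# 2**n - 1 binary strings of length < n (counting the empty string).
# So no lossless scheme can give every length-n string a shorter code.
n = 8
length_n = 2 ** n                        # 256 strings of length 8
shorter = sum(2 ** k for k in range(n))  # 255 strings shorter than 8
print(length_n, "strings, but only", shorter, "shorter codes available")
```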

Wait, I’m getting lost.

“Therefore, we conclude that only some symbol strings have a compressed representation. As a consequence, compression intelligence is only true if the physical effects of the human mind are compressible.”

Right, but you just said, based on the example above, that only some symbol strings can be compressed. So that undercuts the claim that the human mind is compressible.

“Of course, it seems fairly obvious that we can compress the physical effects of the human mind.”

Okay, now I’m confused. You seem to have dedicated a chunk of writing to saying that compression can only shorten some symbol strings. Then you went on to say that the physical effects of the mind are compressible. I would love for you to explain this.

Again, a videotape illustrates that we can compress human action, albeit in a lossy format. So, even though algorithmic intelligence is no longer a necessary truth, it still seems pretty plausible.
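As a toy version of the videotape analogy, here is a sketch in Python; the “recorded action” and the keep-every-fourth-sample scheme are invented for illustration:

```python
# A toy lossy compressor: keep every fourth sample of a recorded signal.
# Like videotape, it shrinks the data but cannot restore the original
# exactly -- decompression only approximates what was recorded.
signal = [0.0, 0.4, 0.9, 1.3, 1.6, 1.8, 1.9, 2.0]

compressed = signal[::4]  # keep samples 0 and 4 -> 4x smaller

# "Decompress" by holding each kept sample until the next one.
restored = [compressed[min(i // 4, len(compressed) - 1)]
            for i in range(len(signal))]

print(compressed)  # [0.0, 1.6]
print(restored)    # [0.0, 0.0, 0.0, 0.0, 1.6, 1.6, 1.6, 1.6] -- lossy
```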

In fact, as long as the possibility of finding a compressed, computable representation of human action remains open, we should take it as the best hypothesis we have and go from there. It is impossible to say definitively that there is no compressed form without investigating all possibilities. And because it is not possible to test every single algorithm that might reproduce human action, we must always assume there is an as-yet-unknown algorithm that fills the gap between what we observe and what we can compute. In other words, we must always assume there is an “algorithm of the gaps,” because the only alternative is to give up. And we never make scientific progress by giving up. Querius?

Eric, how did you come to this conclusion? Is it because actions were represented in a lossy format? Is this your definition of algorithmic intelligence? I’m assuming at this point you are talking about (using your semantics) compressive algorithmic intelligence. You are trying to articulate someone else’s point of view.

When I think of artificial intelligence, I think “strong AI.” But, to keep it simple, if you are saying that people are trying to tack on human-like algorithm after human-like algorithm to “fill the algorithm gap,” that is the case for narrow AI. So it sounds like you are using “compressive algorithmic intelligence” to define an AI that can do any human task, but only as a task agent, not as a general learning agent. I feel as if you are not articulating clearly enough how people pursuing general intelligence think. Nor are you discussing general intelligence yet. Your definition of algorithmic intelligence is not AI. They don’t care about “adding 5 million algorithms to a machine” as a way of enabling algorithmic intelligence to complete any human task.

We have yet to know for sure. Most proponents of strong AI are functionalists, and they won’t be swayed by an “algorithm of the gaps” argument because that is not the goal. They are looking for a number of “generalized” algorithms inspired by the human brain, the way that airplanes are inspired by birds. Such algorithms could learn any task.

They think like this: “Hunger is a program that runs and then halts when the input ‘food’ satisfies hunger’s condition. That was the hunger algorithm in a human. (Again, the more popular belief is that the brain runs a series of algorithms throughout the day, some halting sooner (wakefulness to sleep, hunger to satisfaction) and some halting later (extreme example: life to death).)”
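That functionalist picture can be caricatured as a toy program; the names and thresholds below are illustrative, not anything from the exchange:

```python
import random

# "Hunger" as a program: it keeps running until the input "food"
# satisfies its halting condition, then it halts.
def hunger(threshold=10):
    satisfaction = 0
    while satisfaction < threshold:    # the algorithm keeps running...
        food = random.randint(1, 4)    # ...until input satisfies it
        satisfaction += food
    return "satisfied"                 # the program halts

print(hunger())
```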

They believe that every single human function is a type of algorithm administered by a higher order of algorithms. They are trying to find those higher-order algorithms. You could argue that those algorithms are irreducible, and I am sure you will make that clear below, but I feel like you’re missing some things in the setup for your haymaker.

Can we prove that algorithmic intelligence is impossible?

Querius

What I am saying here is that the artificial general intelligence (AGI) researchers have an “algorithm of the gaps” perspective. They think that algorithmic intelligence is the best idea because there is always the possibility of an undiscovered algorithm that can copy human intelligence. So, no matter how frequently AGI fails to achieve its goal, they think the algorithm is always out there somewhere.

This brings me to the question of whether we can prove that algorithmic intelligence is impossible.

I think I am at hour two of sitting with your writeup, trying to understand exactly what you are saying and then responding to it. It could be me, but some things here are hard to understand. Nonetheless, I venture into this work to find the truth.

Querius

You are quite the trooper! Your patience for the sake of truth is admirable. Now, let’s take a step back. There is another way to prove a negative besides exhaustively enumerating the possibilities.

Consider the math equation 3 + x = 1. Imagine that someone claims that there is a positive integer that can replace x. We’ll call this person a positive realist. Mathematicians have searched for years for a positive number that can replace x but have never found one.

Nevertheless, the positive realist claims that his theory has not been falsified and therefore cannot be ruled out. In fact, he goes on to say, all efforts must be dedicated to substantiating positive realism because negative numbers cannot physically exist. They cannot be investigated scientifically; an attempt to do so might open the door to non-physical existence.

Then, a nonconformist mathematician subtracts 3 from both sides of the equation and shows that x = 1 - 3 = -2, proving the positive realist wrong. This short example shows that, besides exhaustive enumeration, we can also use direct proof based on the known laws of the subject matter to prove a negative.
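The contrast between the two strategies can be made concrete with a short Python sketch (the search limit is arbitrary; the point is that no finite limit settles anything):

```python
# Strategy 1: the positive realist's exhaustive search. However long it
# runs, it can only ever report "not found yet" -- never "impossible."
def search_positives(limit=1_000_000):
    for x in range(1, limit + 1):
        if 3 + x == 1:
            return x
    return None  # inconclusive: maybe the answer lies past the limit?

# Strategy 2: the nonconformist's direct proof. Subtract 3 from both
# sides: x = 1 - 3 = -2, which is negative. No search needed.
x = 1 - 3

print(search_positives())  # None -- the search settles nothing
print(x)                   # -2  -- the equation is settled outright
```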

Why did you use this example?

Querius

The example illustrates the problem AGI proponents face in assuming that there is always an algorithm out there. They are like the positive realist, whose commitment to looking only for certain answers prevents him from ever finding the real answer. The positive realist thinks his view is the only one that makes sense and that it can only be experimentally falsified by checking all positive numbers, which is impossible. Similarly, the AGI proponent thinks his view is the only one that makes sense and that it can only be falsified by checking all possible programs.

The nonconformist mathematician shows that we don’t have to run an infinite number of experiments to falsify the positive realist’s position. Instead, the nonconformist falsifies it by proving a negative (literally!).

So, with AGI, if we can identify something algorithms cannot do and show that humans can do it, then we’ve falsified the AGI position without running an infinite number of experiments across all possible algorithms.
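A standard example from computability theory of something no algorithm can do is the halting problem (this is textbook material, not a claim unique to this exchange): no single program can decide, for every program and input, whether that program halts. A sketch of Turing’s diagonal argument in Python, with the impossible oracle left as a stub:

```python
# Suppose a total function halts(program, arg) existed that correctly
# answers whether program(arg) halts. It is stubbed here because the
# argument below shows no correct implementation can exist.
def halts(program, arg):
    raise NotImplementedError("no algorithm can answer this for all inputs")

def contrarian(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        while True:  # halts() said "halts" -- so loop forever
            pass
    return           # halts() said "loops" -- so halt immediately

# Feeding contrarian to itself traps the oracle: if halts(contrarian,
# contrarian) answers True, then contrarian(contrarian) loops forever;
# if it answers False, it halts. Either answer is wrong, so a correct
# halts() cannot exist.
```

This establishes only the first half of the falsification; whether humans can do what this shows algorithms cannot is the part that would still have to be demonstrated.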

This means the AGI perspective is not the only scientifically valid hypothesis. Nonalgorithmic intelligence is also scientifically valid.

To be continued in Part III: Here’s The Flawed Logic behind “Thinking” Computers

See also: Part I: A program that is intelligent must do more than reproduce human behavior

and

Part III: No program can discover new mathematical truths outside the limits of its code

Note: “Querius” is a pseudonym

Also by Eric Holloway: Will artificial intelligence design artificial superintelligence?

Artificial intelligence is impossible

and

Human intelligence as a Halting Oracle


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
