Mind Matters Natural and Artificial Intelligence News and Analysis

The Flawed Logic behind “Thinking” Computers, Part I

A program that is intelligent must do more than reproduce human behavior

I am publishing, in three parts and with his permission, an exchange with Querius, who is looking for answers as to whether computers can someday think like people. Here is the first part:

Your “Artificial intelligence must be possible! Really…?” is a fascinating piece. This is something that would ring out as heresy in the echo chamber I currently reside in when it comes to research in Artificial Intelligence.

So I’d like to see if I can break things down a bit and give you some counterpoints. My hope is that you can convert me; at worst, you will have had some friendly sparring that lets you look at your arguments from a different perspective. Nothing like a refining fire to remove impurities from the gold. Especially because, no doubt, you are in a minority. Human exceptionalism, period, is in the minority. The voices of AI pontificate about the day machines will usurp us from our thrones. As if nihilism hasn’t gone far enough.

Querius

Great to see your motivation here. Most people I’ve talked with online dismiss anything outside the status quo on this issue, not caring much about either the truth or significance of the matter.

That said, I disagree that human exceptionalists are the minority.

I think they are only the minority among rich technologists, and even the rich technologists are human exceptionalists when it comes to themselves.

Regardless, if artificial general intelligence (AGI) is our future I’ll accept that as well. As you know, I believe AGI is possible. I’ll address that point in the context of your post.

There you write, “To answer this question, let’s consider the trivial form of algorithmic intelligence that prints out a finite string of symbols or generates a physical description verbatim, regurgitates it if you will. We will call this program a “regurgence” for short. Is regurgence an acceptable example of algorithmic intelligence?”

If I were to look up the expression “algorithmic intelligence” in Google right now, what would I find? A Quora post and a couple of vague definitions that are more or less based on opinion. I have seen algorithmic intelligence best described as behavior performed by a von Neumann machine: you feed it “representative input” and it gives you “representative output.” I’m having a semantic dispute with you because of how you defined algorithmic intelligence. People, especially Singularitarians, will accuse you of setting up a straw man.

For me to properly understand your definition of algorithmic intelligence I would need to see an example of what you mean by algorithmic intelligence. What program currently does regurgence in such a literal sense? Or fits your definition of algorithmic intelligence? My definition of algorithmic intelligence is a computer that does as you state in the second part of your argument.

Querius

I use the term “algorithmic intelligence” because any form of AI we can create on a computer is reducible to some Turing machine. So it is some kind of algorithm. I feel that this is more precise than “artificial intelligence.”

The most basic sort of algorithm that can mimic human action is one that reproduces a recording of human behavior. So, one example of algorithmic intelligence is the following print statement:

print("So, one example of algorithmic intelligence is the following print statement.")

And the program prints the sentence.

So there you have it, an intelligent computer program!

Admittedly, this is a silly example but it makes the point that intelligence is more than just functionalism. A program that is intelligent must do more than reproduce human behavior.

Eric, you write, “Because the sort of intelligence we are interested in seems more like compression than regurgence, it is no longer a necessary truth.”

Right, so this is probably the most agreed-on definition of algorithmic intelligence, if there is one, and it does not seem that there is. You start your article by asking “Is algorithmic intelligence practically possible?” That makes no sense to me. If we were to start the article from your second question, “Why do we assume that algorithmic intelligence is theoretically possible?”, it would seem to be a better position. Unless I am wrong, which, throughout our talk, I hope I am. Please clarify this part of your argument.

Querius

It’s a bit of bad wording on my part. If an algorithm that reproduces human behavior requires more storage space than exists in the universe, it is a practical impossibility that also demonstrates the logical impossibility of artificial intelligence. That is, because human intelligence cannot be physically stored in an algorithm, intelligence must be non-algorithmic.

“Why do we assume that algorithmic intelligence is theoretically possible?”

This is more interesting to me:

“Because the sort of intelligence we are interested in seems more like compression than regurgence, it is no longer a necessary truth. It is not necessarily true that all symbol strings have a shorter compressed representation. This is easy to see if we imagine the opposite: If all symbol strings do have a shorter representation, then so must their shorter representations. Thus, we’d end up concluding that all symbol strings can be represented by nothing, which is incoherent. Therefore, we conclude that only some symbol strings have a compressed representation. As a consequence, compression intelligence is only true if the physical effects of the human mind are compressible.”

“Of course, it seems fairly obvious that we can compress the physical effects of the human mind.”

What exactly are you trying to get at with this? “It is not necessarily true that all symbol strings have a shorter compressed representation.”

Example?

Querius

It’s a proof by contradiction. Let’s assume that all symbol strings (stick with 1 and 0 bitstrings for simplicity) can be losslessly compressed. So, that means that all bitstrings of length 20 can be compressed to shorter than length 20. And all bitstrings of length 19 can be compressed to shorter than 19. And so on all the way down to the bitstring of length zero.

Now, consider all the bitstrings of length 20. Since we are assuming they can all be compressed, each bitstring of length 20 must compress down to a unique bitstring of length less than 20. There are 2^20 bitstrings of length 20, so if all of them are compressible, then there have to be at least 2^20 bitstrings of length 19 and less.

So, how many bitstrings of length 19 and less are there? Let’s start from the smallest bitstring, which is the empty bitstring.

– There is only one bitstring of length zero.
– There are only two bitstrings of length one.
– There are only four bitstrings of length two.

How many bitstrings of length two or less are there? We add all the above together, and get seven.

The general form of this progression is that there are (2^N)-1 bitstrings of length N-1 or less.

This means that there are (2^20)-1 bitstrings of length 19 or less.

However, we need 2^20 bitstrings of length 19 or less in order for all of the length 20 bitstrings to be compressible. So, this means at least one of the length 20 bitstrings is not compressible.
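The counting above can be checked directly. Here is a minimal Python sketch (my own illustration, not code from the article) that tallies every bitstring shorter than length n and compares the total to the 2^n inputs a lossless compressor would need to map onto them:

```python
# Pigeonhole check: a lossless compressor must map each length-n bitstring
# to a DISTINCT shorter bitstring, but there are not enough shorter ones.

def count_shorter_bitstrings(n):
    """Number of bitstrings of length strictly less than n: 1 + 2 + ... + 2**(n-1)."""
    return sum(2**k for k in range(n))

n = 20
inputs = 2**n                           # all bitstrings of length 20: 1,048,576
targets = count_shorter_bitstrings(n)   # all bitstrings of length 19 or less

print(targets == 2**n - 1)  # True: matches the (2^N)-1 formula above
print(inputs - targets)     # 1: at least one length-20 bitstring has no shorter image
```

The same shortfall of exactly one appears for every length n, which is why the argument works regardless of where you draw the line.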

Now, it turns out that most of the bitstrings of length 20 are actually incompressible. This result comes from Kolmogorov complexity.
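A real general-purpose compressor makes the same asymmetry concrete. In the sketch below (my example; zlib is only standing in for an ideal compressor, which it is not), highly patterned data shrinks dramatically, while random data, which is overwhelmingly incompressible in the Kolmogorov sense, typically comes out slightly larger because of the compressor's framing overhead:

```python
import os
import zlib

# Patterned data compresses; random data does not. zlib is a stand-in for
# an ideal compressor, but the asymmetry it exhibits is the same one the
# counting argument guarantees must exist.

patterned = b"01" * 5000          # 10,000 bytes with an obvious repeating structure
random_data = os.urandom(10000)   # 10,000 bytes of (almost surely) incompressible noise

print(len(zlib.compress(patterned)))    # a few dozen bytes
print(len(zlib.compress(random_data)))  # typically slightly MORE than 10,000 bytes
```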

Part II: There is another way to prove a negative besides exhaustively enumerating the possibilities

Part III: No program can discover new mathematical truths outside the limits of its code

Note: “Querius” is a pseudonym

Also by Eric Holloway: Will artificial intelligence design artificial superintelligence?

Artificial intelligence is impossible

and

Human intelligence as a Halting Oracle


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
