
What Did the Computer Learn in the Chinese Room? Nothing.

Computers don’t “understand” things and they can’t handle ambiguity, says Robert J. Marks

In a recent MindMatters.ai podcast, “The Unexpected and the Myth of Creative Computers,” Larry L. Linenschmidt of the Hill Country Institute interviews Walter Bradley Center director Robert J. Marks about why we mistakenly attribute understanding and creativity to computers. This is Part II of a discussion of AI between Marks and Linenschmidt (Part I is here). The interview was originally published by the Hill Country Institute and is reproduced with thanks. Here are the Podcast Notes.

Partial transcript:

03:59 | Do computers understand things?

Robert J. Marks: You mentioned that Jay Richards, in his book The Human Advantage: The Future of American Work in an Age of Smart Machines, talked about Searle’s Chinese Room thought experiment.

John Searle is a philosopher who argued that there is no way a computer understands anything, and he illustrated it with the Chinese Room. The basic idea was, you slip a little piece of paper, with something written on it in Chinese, into a little slot. Inside the room, somebody picked it up, looked at it, and wanted to translate it into something like, say, Portuguese. There was a big bunch of file cabinets in the room. The person took the little slip with the Chinese writing on it and did a pattern match: he went through all the file cabinets until he finally found something that matched the little sheet of paper he had. And filed with that matching sheet was the translation into Portuguese.

So he wrote down the Portuguese translation, refiled the original, went to the door, and slipped his copy out. Externally, someone might say: this guy knows Chinese, he knows Portuguese, this computer is really, really smart. But internally, the guy who was actually going to the file cabinets and doing the pattern matching to find the translation had no idea what Chinese was, had no idea what Portuguese was. He was just following a bunch of instructions.
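Note: As a purely illustrative sketch, the procedure the man in the room follows is essentially a lookup table. The short Python program below is not anything Searle or Marks specify, and the phrase table is invented for the example; but it makes the point concrete: the program maps symbols to symbols and produces a correct answer with no representation of meaning anywhere.

```python
# A toy "Chinese Room": translation as pure pattern matching.
# The phrase table below is invented for illustration; a real
# translation system is far more elaborate, but the principle
# is the same: symbols in, symbols out, no understanding.

PHRASE_TABLE = {
    "你好": "Olá",       # the "file cabinets": hard-coded matches
    "谢谢": "Obrigado",
    "再见": "Adeus",
}

def chinese_room(slip: str) -> str:
    """Look the slip up in the 'file cabinets' and return what is filed with it."""
    match = PHRASE_TABLE.get(slip)
    if match is None:
        # No matching file -- and no understanding to fall back on.
        return "???"
    return match

print(chinese_room("谢谢"))  # Obrigado: a correct output, zero comprehension
```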

Larry L. Linenschmidt: So the computer processes. It turns out work product based on how it’s directed, but in terms of understanding—as we think of understanding, the way you would expect one of your students to understand what you’re teaching—they don’t understand. They compute; they process data. Is that a fair way of putting it?

Robert J. Marks: Absolutely. There was a great columnist who, about twenty years ago, commented on Deep Blue beating Kasparov, the world chess champion at the time. Deep Blue was trained to play chess. And he said,

Winning at chess, of course, is much harder than adding numbers. But when you think about it carefully, the idea that Deep Blue has a mind is absurd. How can an object that wants nothing, fears nothing, enjoys nothing, needs nothing and cares about nothing have a mind? It can win at chess, but not because it wants to. It isn’t happy when it wins or sad when it loses. What are its après-match plans if it beats Kasparov? Is it hoping to take Deep Pink out for a night on the town? It doesn’t care about chess or anything else. It plays the game for the same reason a calculator adds or a toaster toasts: because it is a machine designed for that purpose.

David Gelernter, “How hard is chess?” at Time (June 24, 2001)

There’s no mind there. It’s just like Searle’s Chinese Room.

You may also be familiar with IBM Watson beating the world champions at Jeopardy. If you think about it, that’s just a big Chinese Room! Except the files aren’t in cabinets: you have all of Wikipedia and all of the internet available to you. You’re given some sort of question on Jeopardy and you have to get the answer. You look around, do some pattern matching, and link the answer back to the question. So Watson beating the world champions at Jeopardy is exactly an example of a Chinese Room, except that the room is a lot bigger, because computers are a lot faster and can do a lot more.

Larry L. Linenschmidt: Watson had, it would seem, a built-in advantage then, by having infinite—maybe not infinite, but virtually infinite—information available to it to do those matches.

Robert J. Marks: Yes. And, by the way, I read a book that I highly recommend, The AI Delusion by Gary Smith. He pointed out, and I never knew this before, that the people who built IBM Watson were a little bit concerned, because sometimes when you present things to the computer, there’s ambiguity.

08:03 | Computers and ambiguity

Robert J. Marks: I’ll go back to Fred Flintstone to illustrate that. There’s a Fred Flintstone cartoon where he got his fingers stuck, glued, inside a bowling ball. He told Barney Rubble, “We gotta get this off,” and he tried pulling it and everything. So Barney got a big hammer and Fred said, “When I nod my head, you hit it.” …

When you present things to a computer without context, like “When I nod my head, you hit it,” it doesn’t know how to respond. (By the way, Barney did hit Fred in the head.) So one of the things the Watson programmers did, according to Smith, was to say: look, we don’t want any questions in the Jeopardy contest that are confusing like this. The Jeopardy people said: yeah, but you don’t want to fix the game by removing questions like that.

So they actually arrived at a compromise: they would go back to old questions that had been written for earlier Jeopardy programs but never aired. That way, nothing would be written fresh when the questions were put together for the contest between Watson and the other participants. So even there we see the inability of computers to do things that humans are able to do, at least in Watson’s case in that example.
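Note: To make the ambiguity concrete, here is a purely illustrative sketch; it has nothing to do with Watson’s actual pipeline, and the heuristic is invented for the example. A context-blind rule that binds “it” to the most recently mentioned noun reads Fred’s instruction exactly the way Barney did:

```python
# Why "When I nod my head, you hit it" trips up a literal matcher:
# a naive rule binds "it" to the nearest noun, which here is "head,"
# not the bowling ball Fred meant. This heuristic is invented purely
# for illustration.

import re

def naive_resolve_it(sentence: str, earlier_nouns: list[str]) -> str:
    """Bind 'it' to the most recently mentioned noun -- a common
    (and, without context, commonly wrong) shortcut."""
    nouns_in_sentence = re.findall(r"\b(head|ball)\b", sentence)
    candidates = earlier_nouns + nouns_in_sentence
    return candidates[-1] if candidates else "unknown"

sentence = "When I nod my head, you hit it"
context = ["bowling ball"]  # what Fred actually meant by "it"
print(naive_resolve_it(sentence, context))  # head -- Barney's reading
```

A human hearer weighs intent and situation; the rule above sees only word order, which is exactly the gap Marks is pointing to.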

Note: Here are both the earlier podcast, Part I: Gee-Whiz Tech, and the accompanying story (with partial transcript), “AI Reality: Why we don’t think like computers.”


You may also want to consider a pair of discussions between Larry L. Linenschmidt and business philosopher Jay Richards (partial transcripts):

Technology Kills Jobs, Creates New Ones. On this week’s podcast, Jay Richards looks at the way new jobs have historically grown from the turmoil around the deaths of obsolete ones.

and

Robot-Proofing Your Career, Peter Thiel’s Way. Jay Richards and Larry L. Linenschmidt continue their discussion of what has changed—and what won’t change—when AI disrupts the workplace.

