
How WOULD We Know If an AI Is Conscious?

It might be more complicated than we think. A powerful zombie is still a zombie.

Neuroscientist Joel Frohlich asks us to reflect on the “philosophical zombie.” That’s not the zombie of late-night frights. It’s an entity that behaves outwardly in every respect like you and me but has no inner experience (think Stepford Wives). Philosopher David Chalmers popularized the term in 1996, by way of illustrating why consciousness is a Hard Problem.

A powerful computer can crunch through many difficult jobs without any inner life or consciousness. But Frohlich, editor-in-chief of the science communication website Knowing Neurons, asks: what if we weren’t sure? How would we test for consciousness?

Trying to determine whether a powerful AI is conscious means getting past programming that might enable it to generate plausible autobabble. The machine need only sort through millions of examples of relevant sentences from the internet and scarf up whatever passes its grammar checker in order to sound as if it is saying something. But is it? Is anything going on inside?

This is not a strictly academic matter—if Google’s DeepMind develops an AI that starts asking, say, why the color red feels like red and not something else, there are only a few possible explanations. Perhaps it heard the question from someone else. It’s possible, for example, that an AI might learn to ask questions about consciousness simply by reading papers about consciousness. It also could have been programmed to ask that question, like a character in a video game, or it could have burped the question out of random noise. Clearly, asking questions about consciousness does not prove anything per se. But could an AI zombie formulate such questions by itself, without hearing them from another source or belching them out from random outputs? To me, the answer is clearly no. If I’m right, then we should seriously consider that an AI might be conscious if it asks questions about subjective experience unprompted. Because we won’t know if it’s ethical to unplug such an AI without knowing if it’s conscious, we better start listening for such questions now.

Joel Frohlich, “Here’s How We’ll Know an AI Is Conscious” at Nautilus (March 29, 2021; originally March 2019 at Facts So Romantic)
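To see why grammatical output proves nothing by itself, consider a toy illustration (ours, not Frohlich’s): even a tiny word-level Markov chain, trained on a handful of sentences, can emit fluent-sounding remarks about consciousness with nothing going on inside. A minimal sketch in Python:

```python
# A toy "autobabble" generator: a word-level Markov chain that mimics
# its training sentences statistically, with no understanding at all.
import random
from collections import defaultdict

corpus = (
    "the redness of red is a subjective experience . "
    "the sweetness of sweet is a subjective experience . "
    "consciousness is a subjective experience . "
    "red is a color and sweet is a taste ."
).split()

# Bigram transition table: each word maps to the words observed after it.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def babble(start="the", length=12):
    """Emit plausible-sounding text by sampling the transition table."""
    words = [start]
    for _ in range(length):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble())  # e.g. "the redness of red is a color and sweet is a taste"
```

Real language models are vastly more sophisticated, but the point stands: fluent output is evidence of statistics, not of an inner life.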

Philosophers call subjective experiences qualia: As Frohlich puts it, “Our conscious experiences are composed of qualia, the subjective aspects of sensation—the redness of red, the sweetness of sweet. The qualia that compose conscious experiences are irreducible, incapable of being mapped onto anything else.” Put another way, what mango ice cream means to you is a unique experience; what it means to the person next to you is a different unique experience. Only conscious entities can have experiences because there needs to be a “self” that they happen to.

Frohlich proposes a test (if it ever comes to that). We would begin by isolating the AI from the internet, then ask some questions:

What might we ask a potential mind born of silicon? How the AI responds to questions like “What if my red is your blue?” or “Could there be a color greener than green?” should tell us a lot about its mental experiences, or lack thereof. An AI with visual experience might entertain the possibilities suggested by these questions, perhaps replying, “Yes, and I sometimes wonder if there might also exist a color that mixes the redness of red with the coolness of blue.” On the other hand, an AI lacking any visual qualia might respond with, “That is impossible, red, green, and blue each exist as different wavelengths.” Even if the AI attempts to play along or deceive us, answers like, “Interesting, and what if my red is your hamburger?” would show that it missed the point.

Joel Frohlich, “Here’s How We’ll Know an AI Is Conscious” at Nautilus (March 29, 2021; originally March 2019 at Facts So Romantic)

However, Frohlich has an even more searching question up his sleeve:

… the best question of all would likely be that of the hard problem itself: Why does consciousness even exist? Why do you experience qualia while processing input from the world around you? If this question makes any sense to the AI, then we’ve likely found artificial consciousness. But if the AI clearly doesn’t understand concepts such as “consciousness” and “qualia,” then evidence for an inner mental life is lacking.

Joel Frohlich, “Here’s How We’ll Know an AI Is Conscious” at Nautilus (March 29, 2021; originally March 2019 at Facts So Romantic)
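Taken together, Frohlich’s probes amount to a simple interview protocol: isolate the AI, pose the qualia questions, and have human judges read the replies. As a rough sketch only, assuming a hypothetical ask() interface to an already-isolated AI (nothing here is Frohlich’s code), it might look like this:

```python
# A minimal sketch of Frohlich's proposed protocol. `ask` is a
# hypothetical callable wrapping an AI that has been cut off from the
# internet, so it cannot simply look these questions up.
QUALIA_PROBES = [
    "What if my red is your blue?",
    "Could there be a color greener than green?",
    "Why does consciousness even exist?",
    "Why do you experience qualia while processing input?",
]

def run_qualia_probe(ask):
    """Collect the AI's answers for later review by human judges;
    no string match can decide whether a reply reflects real qualia."""
    return [(question, ask(question)) for question in QUALIA_PROBES]

if __name__ == "__main__":
    # Stand-in "zombie" that deflects to physics, the kind of reply
    # Frohlich says would reveal an absence of visual qualia.
    zombie = lambda q: "That is impossible; colors are just wavelengths."
    for q, a in run_qualia_probe(zombie):
        print(f"Q: {q}\nA: {a}\n")
```

The scoring, of course, is the hard part: as the passages above make clear, it takes a conscious reader to recognize whether an answer engages the question or merely misses the point.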

As a matter of fact, no human being can come up with a definitive answer to those kinds of questions. But human beings do have the types of experiences that enable us to understand what the questions are about. And that’s what makes the difference.

His proposed qualia test sounds somewhat similar to Selmer Bringsjord’s Lovelace test, which asks whether a machine has departed from its programming to develop a genuinely new idea. Of course, the Lovelace test doesn’t strictly show that a machine is conscious. But passing it would mean that we should try to find out, possibly using Frohlich’s test. Frohlich thinks it would be unethical simply to unplug a computer that passed.

It’s safe to say that we don’t need to worry about this with any current or practically foreseeable computer.


You may also wish to read:

Thinking machines: Has the Lovelace test been passed? Surprising results do not equate to creativity. Is there such a thing as machine creativity?

and

Why AI geniuses haven’t created true thinking machines. The problems have been hinting at themselves all along

