Image: 3D illustration of emotions as cutout figures (Adobe Stock)

Can We Teach a Computer to Feel Things? A Dialogue…

Okay, there’s the computer’s side… and then there’s the dog’s side. Listen to both

The dialogue got started because of a gifted computer nerd, Rosalind Picard (pictured), also a playwright, who became an evangelical Christian in midlife (around 2019). As she tells it, “a flat, black-and-white existence suddenly turned full-color and three-dimensional.”

Founder and director of the Affective Computing Research Group at MIT’s Media Lab, she is also the author of Affective Computing (MIT Press, 1997), a book that seems to suggest one could somehow give emotions to machines. I asked Eric Holloway to help me figure that one out:

O’Leary: Emotions are based on actual well-being or suffering. How can something that is not alive have actual emotions? Don’t think of people here; think of dogs. Dogs have emotions. When my computer is giving me trouble, I certainly hope it’s not because the thing is upset with me.

Holloway: That part is less clear. She thinks emotions have both a subjective and an objective component, and that the objective, measurable aspect of emotions can be computerized so that computers can use it to make decisions.

O’Leary: I still don’t get it. Emotions are unique to individuals. Reason may be objectified; emotion, not. You could agree with me about the evils of cruelty to animals (reason) but if someone poisoned my cat, I would grieve and you wouldn’t (emotion). How do we objectify that?

Holloway: She is objectifying the behavior, which she calls the mechanisms of emotion. One of her examples is fear. When a robot senses danger, it changes its internal state to the “fear” state, which makes the robot think in terms of immediate objectives instead of long-term objectives.
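To make that idea concrete, here is a minimal sketch of such a “fear mechanism.” It only illustrates the kind of state switch Holloway describes; the state names, the danger threshold, and the goal lists are invented for this example, not taken from Picard’s book.

```python
# Illustrative toy only: a "fear" state that redirects planning from
# long-term goals to immediate ones. Nothing here feels anything.

class Robot:
    def __init__(self):
        self.state = "calm"
        self.long_term_goals = ["map the room", "recharge by evening"]
        self.immediate_goals = ["retreat from the threat", "protect sensors"]

    def sense(self, danger_level: float) -> None:
        # A simple threshold flips the internal state label.
        self.state = "fear" if danger_level > 0.7 else "calm"

    def next_action(self) -> str:
        # In the "fear" state, planning collapses to immediate objectives.
        goals = self.immediate_goals if self.state == "fear" else self.long_term_goals
        return goals[0]


robot = Robot()
robot.sense(danger_level=0.9)
print(robot.state, "->", robot.next_action())  # prints: fear -> retreat from the threat
```

Nothing in such a program feels anything; “fear” is just the label on one branch of the code.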

O’Leary: Okay, but that’s not the experience of fear. Not as the cat would understand it when he is being chased by dogs into a tree. I don’t doubt one can do what she says with robots but why call it fear when no one is feeling it?

Holloway: I agree. It is just anthropomorphizing the algorithms, same with the rest of AI.

O’Leary: In this case, I would argue, “feline-o”-morphizing them. I explicitly chose a cat or a dog because I want to leave reason out of it just now and concentrate on sentience, the ability to feel things. It’s part of the package of being a live cat. But in the real world, how would I know that a robot was sentient? And what difference would it make?

Holloway: A tough question, because a lot of what I think of as sentience can be copied with a robot, at least to some degree. A robot can have some limited goal-seeking, a notion of harm to self, self-maintenance, and so on. But higher-level sentience is also interested in impractical things, such as having fun, some limited humor, some appreciation of beauty, affection, a sense of dignity, as if the creature were in touch with value that is not directly physical. That latter aspect is even harder to computerize.

O’Leary: But does the robot really have any such notions or is it a simulation of how an entity that has such notions might behave?

One could doubtless program a robot to flee from a pack of hounds, but is that the same thing as a fat cat leaping an improbable two metres into a tree? The cat is actually experiencing something. We know that. How would I know that a robot was experiencing something?

Let me put it like this: Suppose someone could grab the neural correlates of a cat scrambling into a tree, pursued by hounds, and instantiate them in a robot. Would the robot actually be having that experience? Would it be aware, as the cat is, of that experience?

In the cat, the neural correlates are the expression of something he is experiencing. I make no claim for reason or the soul or anything like that but the fact that the cat is alive and sentient means that it is an actual experience, not a simulation.

Holloway: Right, I think that’s the fundamental problem. The robot is not having the experience, it is just electrical signals tripping switches, whereas the cat is responding directly to the experience. So there is something intrinsically different between real sentience and the robotic imitation. As a consequence, I also don’t think it’ll ever be possible to create a robot that even mimics sentience, because it is impossible to program experience, and the behaviors are reactions to the experience.

It is the same fundamental problem as with AI. Human intelligence is experiential. Reason, intuition, creativity, and the rest progress through interaction with experience. This can never be done with a computer, no matter how fancy the circuits. Circuits can never reproduce experience, and thus the robot can never respond to experience. Experiences transcend matter, since the same experience can occur with different matter: red apples and stoplights both give me the experience of redness. As a result, it is inherently impossible to replicate human intelligence with a computer, because a computer cannot replicate experience. As a corollary, computers can never even mimic human intelligence.

O’Leary: Bear with me one more time as I work this out. I am trying to imagine it in real life, the way things happen:

You are a dog’s human friend and he is quite sick. You take him to the vet. The vet uses up-to-date monitoring systems and the assistant helpfully explains to you what all those machine signals mean. So you sit there watching the machine and your presence in the room reassures the dog.

You are having an experience reading the vital signs. The dog is having quite a different experience living them. You have all of his data and none of his experience. The dog has none of his data and all of his experience.

Suppose you took all that data and instantiated it into a robot. Is the robot having your experience or the dog’s? Or neither, actually?

Is it even possible to be the subject of experience without being alive? How could simulation amount to the same thing? That’s the part I don’t understand.

Holloway: Yes, it’s a weird phenomenon in computer science, but we like to call things by names for what they are not, like “artificial intelligence.” We like to hide the artificiality of the computers by naming them after real things. Just as children pretend their dolls are real people, we pretend our circuits and signals are really alive, thinking and feeling, and replacements for the real world. I guess it is the grown-up version of make-believe.

O’Leary: As long as we remember it’s all make-believe.

Next: A look at Rosalind Picard’s interesting play about robots who get a clue.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Human Soul: What Neuroscience Shows Us about the Brain, the Mind, and the Difference Between the Two (Worthy, 2025). She received her degree in honors English language and literature.
