Feelings motivate living things to seek optimum states for survival, helping to ensure that behaviors maintain the necessary homeostatic balance. An intelligent machine with a sense of its own vulnerability should similarly act in a way that would minimize threats to its existence.
To perceive such threats, though, a robot must be designed to understand its own internal state.

Tom Siegfried, “A will to survive might take AI to the next level” at ScienceNews (November 10, 2019)
Physiologist J. Scott Turner, author of Purpose and Desire, explains how homeostasis enables a termite mound to have a sort of “collective mind” (a “giant crawling brain”) that enables the mound’s preservation even though no individual termite is very smart.
Even though robots are not alive, Kingson Man and Antonio Damasio hope that new techniques in soft robotics and deep learning can “inspire artificial intelligence to more closely emulate the real thing” and endow robots with a sense of well-being and empathy: “A robot capable of perceiving existential risks might learn to devise novel methods for its protection, instead of relying on preprogrammed solutions.” (ScienceNews)
But what are the existential risks anyway for something that exists only as an artifact? And how much do we really know about how a sense of well-being or empathy comes to exist in humans? Still, the project has certainly attracted interest. At NOVA, we read a confident prediction:
Just like emotional intelligence is no longer being overlooked, encouraging socialness can be made a priority in robots. Depending on the needs or desires of users, interactions with robots might end up combining aspects of those we have with teachers, companion animals, and friends. While the end result might not be quite the same as human friendships or partnerships, it could still be fulfilling.
“It’s a different type of relationship that opens people up to interacting in a different way,” Breazeal says.
To make this type of companionship work, robots would need some kind of understanding of what makes humans tick…

Jackie Snow, “This time, with feeling: Robots with emotional intelligence are on the way. Are we ready for them?” at NOVA (July 17, 2019)
Given that we have no practical idea what human consciousness even is, we can only program into robots a pretense of experiencing or responding to it.
The discussion at NOVA focuses on using robots to assist people with cognitive deficits in daily living or learning. In other words, the goal is to automate repetitive reminder tasks and the like.
A health care or education worker will doubtless find such tasks frustrating (to say nothing of the institutional expense incurred by hiring trained personnel for the purpose). But the fact that automation provides a partial solution shouldn’t result in a vast inflation of its significance. The machine still doesn’t care and doesn’t need to.
The basic idea, as with the chatbot, is to create the illusion of feelings without actual sentience or consciousness. But the world of chatbots has not been a happy one. If the robot Sophia were your neighbor, chances are you’d be a bit nervous about her quest to become more like a human being and kill people. And some bad outcomes are serious: the sexbot may end up encouraging insensitive or violent behavior because the user is encouraged to feel that acting on such impulses is normal.
Of course, some people hate robots and attack them:
People can be really mean to robots. We humans have been known to behead them, punch them, and attack them with baseball bats. This abuse is happening all over the world, from Philadelphia to Osaka to Moscow.
That raises the question: Is it unethical to abuse a robot? Some researchers have been wrestling with that — and figuring out ways to make us empathize more with robots.

Sigal Samuel, “Humans keep directing abuse — even racism — at robots” at Vox
Is it unethical to abuse a robot? Well, is it unethical to crush a keyboard? In the real world, in either case, the sufferers are the humans in the vicinity. Taking out rage on inanimate objects doesn’t address the underlying problem of senseless rage that can—at times—end in tragedy.
So how far have we come with giving robots feelings? Pretty far, in our own imagination. The goal is to program an ever better illusion of feeling into various AI apps so that we can more easily make ourselves believe that they are alive and sentient. At times, we may think a more powerful illusion is a good enough substitute for reality. As long as we aren’t fooling ourselves.