
That Robot Is Not Self-Aware

The way the media cover AI, you'd almost think they had invented being hopelessly naïve

Image: the intact robotic arm (Columbia University)

Chances are, you’ve already seen this headline or one of many like it: “Robot that thinks for itself from scratch brings forward rise of the self-aware machines”

It’s from a story first published in The Telegraph (UK), then picked up by Yahoo News and MSN, and then (of course) linked on Drudge. Henry Bodkin, “health and science correspondent” for The Telegraph, tells us, with no hint of caution, that “the rise of ‘self-aware’ robots has come a major step closer following the invention of a machine capable of thinking for itself from scratch, scientists have said.” The first problem with both the headline and the story is confusion: they claim both that the robot under discussion is already self-aware and that it heralds the rise of “self-aware robots” in the future.

Take this bundle of confusion and exaggeration as a harbinger for the next twenty years of reporting on robotics and artificial intelligence. It’s likely to get worse from here.

The story is based on a paper in Science Robotics, “Task-agnostic self-modeling machines.” It was written by two Columbia University scientists, who used a new type of machine learning with a robot arm.

So what happened? “A robot modeled itself,” as the authors put it, “without prior knowledge of physics or its shape and used the self-model to perform tasks and detect self-damage.” The basic idea is that they developed a way for a robot arm with four degrees of freedom to construct a “self-model” through a series of initially random movements—rather than having the self-model supplied by programmers beforehand. And the self-model it constructed was accurate enough to allow it to perform various assigned tasks without, say, destroying itself.

They summarize the process in this way:

Step 1: The robot recorded action-sensation pairs.
Step 2: The robot used deep learning to create a self-model consistent with the data.
Step 3: The self-model could be used for internal planning of two separate tasks without any further physical experimentation.
Step 4: The robot morphology was abruptly changed to emulate damage.
Step 5: The robot adapted the self-model using new data.
Step 6: Task execution resumed.
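To make the procedure concrete, here is a minimal sketch of that loop, not the authors’ code. It substitutes a simulated two-joint planar arm and a small scikit-learn network for their physical four-degree-of-freedom arm and deep-learning model; the arm geometry, network size, and planning-by-random-sampling are all illustrative assumptions.

```python
# Hedged sketch of the self-modeling loop, under the assumptions above.
import numpy as np
from sklearn.neural_network import MLPRegressor

def arm_forward(angles, lengths):
    """Ground-truth kinematics of a 2-link planar arm -- the 'physics'
    the robot is NOT given and must instead learn from data."""
    a1, a2 = angles
    l1, l2 = lengths
    return np.array([l1 * np.cos(a1) + l2 * np.cos(a1 + a2),
                     l1 * np.sin(a1) + l2 * np.sin(a1 + a2)])

def babble(lengths, n, rng):
    """Step 1: record action-sensation pairs via random motor babbling."""
    actions = rng.uniform(-np.pi, np.pi, size=(n, 2))
    sensations = np.array([arm_forward(a, lengths) for a in actions])
    return actions, sensations

def plan(model, target, rng, n_candidates=2000):
    """Step 3: internal planning -- query the self-model (not the arm)
    for the action whose predicted hand position is closest to target."""
    candidates = rng.uniform(-np.pi, np.pi, size=(n_candidates, 2))
    errors = np.linalg.norm(model.predict(candidates) - target, axis=1)
    return candidates[np.argmin(errors)]

rng = np.random.default_rng(0)
lengths = np.array([1.0, 0.8])

# Steps 1-2: babble, then fit a self-model consistent with the data.
X, Y = babble(lengths, 3000, rng)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     warm_start=True, random_state=0)
model.fit(X, Y)

# Step 3: plan a reach to a target using only the self-model.
target = np.array([1.2, 0.5])
action = plan(model, target, rng)
print("reach error:", np.linalg.norm(arm_forward(action, lengths) - target))

# Step 4: 'damage' the arm by abruptly changing its morphology.
lengths = np.array([1.0, 0.4])

# Steps 5-6: adapt the self-model with fresh data, then resume the task.
X2, Y2 = babble(lengths, 1000, rng)
model.fit(X2, Y2)  # warm_start=True refines the old model rather than restarting
action = plan(model, target, rng)
print("post-damage error:", np.linalg.norm(arm_forward(action, lengths) - target))
```

Notice what the sketch makes plain: every step is curve-fitting and search over numbers. Nothing in it requires, or produces, anything resembling awareness.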

This is an important development in its own right. Still, as usual, it involves statistical algorithms applied to a robotic device in a new way. The robot still doesn’t understand anything. It’s not aware of the world around it, let alone aware of itself. There’s absolutely no reason to think it’s on its way to self-awareness.

As is typical in such cases, the scientists are fairly careful in the way they speak of their findings. But they do give science reporters a few bits to chomp on. The first is the usual metaphorical language common in the AI and robotics literature. They speak of the robot “understanding” and having “knowledge.”

Another tidbit comes near the beginning, when the authors say in passing: “Humans likely acquire their self-image early in life and adapt it continuously. However, most robots today cannot generate their own self-image.”

But the really big morsel comes in the final sentence. “Self-imaging,” they speculate, “will be key to allowing robots to move away from the confinements of so-called narrow AI toward more general abilities. We conjecture that this separation of self and task may have also been the evolutionary origin of self-awareness in humans.”

This is the only reference to self-awareness in the paper. But even here, the authors apply it to humans, not to the robot, and they qualify it as a conjecture. They leave it to readers—and reporters—to connect the dots along the following lines: “If we became self-aware in this way, then perhaps the same thing is happening with this robot arm.” That’s no doubt the inference they’re hoping for, but they don’t risk stating it explicitly.

It is surely this brief concluding conjecture that provided the thread that the science correspondent then used to knit together a story about the rise of self-aware robots. Without that thread, the paper from Science Robotics would not likely have made international news (and found its way to news headline site Drudge, which loves robot stories).

The scientists are hardly blameless here. One of them even provides a quote not found in the paper itself: “This is perhaps what a newborn child does in its crib, as it learns what it is.”

If this is how The Telegraph reports on a robotic arm, can you imagine what it will sound like when we get humanoid robots that seem to carry on conversations? We had best inoculate ourselves now against AI hype from science reporters while most of us still have enough self-awareness to realize what’s going on.

Jay Wesley Richards

Jay Richards is a research assistant professor at the Busch School of Business and author, with Jonathan Witt, of The Hobbit Party: The Vision of Freedom that J.R.R. Tolkien Got and the West Forgot. His most recent book is The Human Advantage: The Future of American Work in an Age of Smart Machines (2018).

Also by Jay Richards: A Short Argument Against the Materialist Account of the Mind

See also: Jay Richards Asks, Can Training for an AI Future Be Trusted to Bureaucrats?

