All AIs Are Psychopaths

We can use them but we can’t trust them with moral decisions. They don’t care why

As any parent knows, soon after a child learns to talk, we incessantly hear “Why?” “Why?” “Why?”

It’s natural. Children don’t just want to know “facts”; they want to know what they mean and why they matter. And how things work.

“Why do I have to wash my hands every time?”

“If there are really germs, why can’t I see them?”

“Wow! But how does the microscope work?”

It’s part of our natural human curiosity to reach out for these connections.

And that is where artificial intelligence glaringly fails. Not only does AI fail to ask such questions, it can’t answer them when asked.

Will Knight, Senior Editor at Wired, puts it like this:

Here’s a troubling fact. A self-driving car hurtling along the highway and weaving through traffic has less understanding of what might cause an accident than a child who’s just learning to walk.

Will Knight, “If AI’s So Smart, Why Can’t It Grasp Cause and Effect?” at Wired

This failure is not just an armchair theory. As Knight explains, researchers at MIT trained a cutting-edge Deep Learning AI on a “simple virtual world filled with a few moving objects.” They then asked the system various questions. As expected, the system returned the correct answer to recognition questions, such as “What color is this object?”, about 90% of the time. But when given more complex questions, those that entailed understanding cause and effect, performance fell to just 10%.

Failure to understand how things interact is not just an interesting research question. It is a dangerous limit underlying all AI systems.

In a recent piece, I (again) criticized the attempts of some researchers to distill moral decision-making into a form that an AI can use. The inability of an AI to see its options and understand the impact of its decisions underlies my concern.

Humans do not mechanically apply rules when we make decisions. We evaluate the situation, we understand the effects of our choices, and then we choose. We all make bad moral decisions at times, but a person who lacks the ability to understand the implications of moral choices is what we sometimes call a psychopath.

So here, then, is what we need to see: All AIs are psychopaths.

The researchers Knight describes have created an AI that blends Deep Learning and other techniques to improve its causal reasoning. However, “The approach requires more hand-built components than many machine learning algorithms, and Tenenbaum cautions that it’s brittle and won’t scale well.” (Knight)

That is, it breaks quickly and can’t do much.

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, we can do it with (simplified) virtual worlds, we can do it with statistical learning (such as with Deep Learning systems). We’ll likely create other means as well. But, as long as “no one is home”—that is, the machines lack minds—gaps will remain and those gaps, without human oversight, can put us at risk.

AI cannot replace us. More than that, it’s possibly dangerous without us. It has no mind. It has no conscience. On its own, it is a psychopath.


Further reflections from Brendan Dixon and others on AI and ethics:

The “Moral Machine” is bad news for AI ethics. Brendan Dixon: Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence
Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply. Machines have no mind by which to apply the rules. Instead, researchers must train them with millions of examples and HOPE the machine extracts the correct message…

Will self-driving cars change moral decision-making? (Jay Richards) It’s time to separate science fact from science fiction about self-driving cars

There is no universal moral machine. The “Moral Machine” project aimed at righteous self-driving cars revealed stark differences in global values.

and

Who assumes moral responsibility for self-driving cars? Can we discuss this before something happens and everyone is outsourcing the blame? (Jonathan Bartlett)


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked both as a Principal Engineer and Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged and interested in Artificial Intelligence.