Mind Matters Natural and Artificial Intelligence News and Analysis

Search Results: trolley problem

Streetcar in Toronto, Ontario, Canada

The “Moral Machine” Is Bad News for AI Ethics

Despite the recent claims of its defenders, there is no way we can outsource moral decision-making to an automated intelligence

Here’s the dilemma: The Moral Machine (the Trolley Problem, updated) feels necessary because the rules by which we order our lives are useless with automated vehicles. Laws embody principles that we apply; machines have no mind with which to apply the rules. Instead, researchers must train them on millions of examples and hope the machine extracts the correct message…

Read More ›
Business Ethics

Artificial Ethics May Make Poor Choices

Whether or not AI can become powerful enough to follow its own rules is still an open question

We’ve all heard about computers that make poor ethical choices. One of the most memorable is HAL 9000 in the 1968 classic, 2001: A Space Odyssey. In the film, HAL kills four humans and attempts to kill a fifth. The concurrently written book elaborates on HAL’s murderous plans, explaining that they stemmed from HAL’s inability to resolve an ethical dilemma: lie to the humans or kill them (and thus no longer be forced to lie to them). Poor HAL 9000! If only people had developed a new field of academic inquiry in time to help him (or should we say, “it”?) make better fictional ethical choices! Putting aside Hollywood’s imaginary universes, the real need for the new

Read More ›

Businessman with psychopathic behaviors

All AIs Are Psychopaths

We can use them, but we can’t trust them with moral decisions. They don’t care why

Building an AI entails moving parts of our intelligence into a machine. We can do that with rules, (simplified) virtual worlds, statistical learning… We’ll likely create other means as well. But as long as “no one is home,” that is, as long as the machines lack minds, gaps will remain, and those gaps, without human oversight, can put us at risk.

Read More ›
The difference between right and wrong

Will Self-Driving Cars Change Moral Decision-Making?

It’s time to separate science fact from science fiction about self-driving cars

Irish playwright John Waters warns of a time when we might have to grant moral discretion to computer algorithms, just as Christians now grant it to the all-knowing but often inscrutable decrees of God. Not likely.

Read More ›
End of the road. Precipice, indicated by signs. 3d render

There is no universal moral machine

The “Moral Machine” project, aimed at producing righteous self-driving cars, revealed stark differences in global values

Whatever the causes of cultural differences, Brendan Dixon thinks that the Moral Machine presents mere caricatures of moral problems anyway. “The program reduces everything to a question of who gets hurt. There are no shades of gray or degrees of hurt. It is, as is so often with computers, simply black or white, on or off. None of the details that make true moral decisions hard and interesting remain.”

Read More ›
Cafeteria tables

How Can AI Help Us With What We Care About?

Instead of making us part of things we don’t care about?

Despite the misguided hype, AI is just another tool. So it is encouraging to read about the ways that Japanese firm Hitachi is using AI as a tool to provide services that would otherwise be difficult or unavailable.

Read More ›