Snowman against Alpine panorama (©HappyAlex/stock.adobe.com)

If You Think Common Sense Is Easy to Acquire…

Try teaching it to a state-of-the-art self-driving car. Start with snowmen.

Imagine taking a driver’s test. As you round a corner, you see a snowman perched perilously at the side of the road. Immediately, you jump on the brakes because—who knows?—that snowman might—just might—cross the road.

Congratulations! You just failed your driving test.

We laugh. No sensible driver would stop and wait for a snowman. But a self-driving car very well might. And it’s not only snowmen that challenge AI; so does a broad range of unexpected encounters:

— An ice cream truck stopped at the side of the road is likely to draw children from who knows where, even if you cannot yet see them.
— A ball bouncing into the street means a child will probably follow.
— A person sitting in a parked car just might swing open the door without first looking.
— A car moving into your lane a few cars ahead may cause a sudden slowdown…

Self-driving car entrepreneurs promise that their vehicles will reduce accidents and save us from ourselves, but they are making promises based on hopes, not data:

The line is blurred most notably by Elon Musk. While Tesla officially insists its customers are always in control of the car, its CEO promises its cars will soon be “fully self-driving” and appears in places like CBS’ 60 Minutes, driving on Autopilot with his hands in his lap. Meanwhile, Tesla’s cars have repeatedly made news by crashing into stopped fire trucks and turning semis. But Tesla is just the most prominent automaker making a mess with marketing names that use terms like pilot, cruise, and assist in different combinations and that people struggle to understand.

Alex Davies, “Don’t overestimate the ‘semi’ in semiautonomous cars” at Wired

The AI behind such promises cannot make the sound decisions these situations require.

Melanie Mitchell, Professor of Computer Science at Portland State University, does not think that more training data will, in itself, solve the problem:

You go around a curve, and suddenly see something in the middle of the road ahead. What should you do?

Of course, the answer depends on what that ‘something’ is. A torn paper bag, a lost shoe, or a tumbleweed? You can drive right over it without a second thought, but you’ll definitely swerve around a pile of broken glass. You’ll probably stop for a dog standing in the road but move straight into a flock of pigeons, knowing that the birds will fly out of the way. You might plough right through a pile of snow, but veer around a carefully constructed snowman. In short, you’ll quickly determine the actions that best fit the situation – what humans call having ‘common sense’.

Melanie Mitchell, “How do you teach a car that a snowman won’t walk across the road?” at Aeon

What self-driving cars lack, and what we have, is common sense. Put more bluntly: Self-driving cars can and will do only what their training data taught them. And the world of driving, let alone the world encompassing our lives, is too big and too varied to fit within training data and algorithms.

Common sense has been, and remains, the pinnacle of AI hopes. An AI with common sense would see and reason about the world as well as, or better than, a human. But there’s the problem: AI does not reason. Minds reason. AI systems deliver only wooden responses, which Mitchell more or less admits:

Today’s most successful AI systems use deep neural networks. These are algorithms trained to spot patterns, based on statistics gleaned from extensive collections of human-labelled examples… While we’ve made remarkable progress, the machine intelligence of our current age remains narrow and unreliable.

Melanie Mitchell, “How do you teach a car that a snowman won’t walk across the road?” at Aeon
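By way of illustration (this sketch is mine, not Mitchell’s or the article’s), here is a toy nearest-neighbor classifier in Python. Every feature name and training example is invented. The program can only echo the patterns in its labeled data, so a human-sized, human-shaped object it has never seen, such as a snowman, comes back confidently labeled “pedestrian”:

```python
# A toy sketch (not from the article): a 1-nearest-neighbor "classifier"
# trained on labeled examples, in the spirit of Mitchell's description.
# Features and data are hypothetical: (height_m, moves, has_limbs).
training_data = [
    ((1.7, 1.0, 1.0), "pedestrian"),
    ((1.8, 1.0, 1.0), "pedestrian"),
    ((0.3, 0.0, 0.0), "debris"),
    ((0.2, 0.0, 0.0), "debris"),
]

def classify(features):
    """Label an input by its nearest training example (1-NN)."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda ex: dist(ex[0], features))[1]

# A snowman: human-sized, human-shaped, but it will never move.
# The classifier has no concept of "snowman"; it can only match
# patterns it has already seen, so it answers "pedestrian".
print(classify((1.6, 0.0, 1.0)))  # -> pedestrian
```

A real perception stack is vastly more sophisticated than this, but the failure is the same in kind: nothing like a concept of “snowman” exists outside whatever the labeled examples happened to supply.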

She hopes that if we can somehow train AI like we train children, we can create (grow?) an AI with the common sense needed to navigate the world. But she forgets that children have minds to train; AI has only circuits.


Also by Brendan Dixon: News from the real world of self-driving taxis: Not yet. Waymo includes a human in all its “robotaxis,” just in case, because the vehicles (at last report) were still confounded by common conditions

and

Yes, there ARE ghosts in the machine. And one of them is you. You power AI whenever you prove your humanity to the CAPTCHA challenges overrunning the web. AI systems are not some alien brain evolving in our midst. They are machines we build and train by embedding our humanity into their programming.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he’s remained engaged and interested in Artificial Intelligence.
