Stop sign with damage. Photo by Yuliya Kosolapova on Unsplash

McAfee: Assisted Driving System Is Easily Fooled

Defacing a road sign caused the system to dramatically accelerate the vehicle

Two inches of tape is not much. But placed correctly, it can create havoc. That’s what security researchers at McAfee discovered recently when testing a Tesla assisted driving system:

Don’t believe your car’s lying eyes. Hackers have manipulated multiple Tesla cars into speeding up by 50 miles per hour. The researchers fooled the car’s Mobileye EyeQ3 camera system by subtly altering a speed limit sign on the side of a road in a way that a person driving by would almost never notice.

Patrick Howell O’Neill, “Hackers can trick a Tesla into accelerating by 50 miles per hour” at Technology Review

The researchers tricked the Tesla into autonomously accelerating by extending the middle segment of the number three on a 35 MPH sign by two inches. The Mobileye EyeQ3 camera misread the altered sign as indicating a speed limit of 85 MPH (137 kph)… and the car began accelerating.

Fortunately, this all took place on a test track under researcher control. And yes, that is the strategy. As explained at McAfee’s blog, “It’s one of those very rare times when researchers can lead the curve ahead of adversaries in identifying weaknesses in underlying systems.” (February 19, 2020)

Which is a roundabout way of saying that it is better to discover and correct this problem now than to wait until altering road signs becomes a craze among juvenile delinquents a decade from now.

But that’s not all. The researchers also fooled the system into believing that a stop sign was a 35 MPH sign (through the placement of a single white square on the sign), that a modified 35 MPH sign read 45 MPH, and that another stop sign signaled that a lane had been added to the road.

To be fair, the modified signs fooled the Mobileye EyeQ3 camera on board a 2016 Tesla. Tesla no longer uses this camera, and Mobileye claims that more recent models are not so easily fooled.

For some time, researchers have been using adversarial images to test the limits of AI image recognition. But the research has unexpectedly highlighted a deeper problem.
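To give a sense of what adversarial-image testing looks like in software (as opposed to the physical tape attack described above), here is a minimal sketch of the fast gradient sign method, a standard way of generating adversarial images. It assumes PyTorch and an off-the-shelf classifier; the model, image tensor, and epsilon value are illustrative only and are not drawn from the McAfee research.

import torch
import torch.nn.functional as F
import torchvision.models as models

# Load a pretrained classifier purely for illustration; any differentiable
# image model would do. This is not the Mobileye system.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged to maximize the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Hypothetical usage: `img` is a 1x3x224x224 tensor scaled to [0, 1] and
# `label` is the correct class index for that image.
# adversarial = fgsm_attack(img, torch.tensor([label]))
# The per-pixel change is nearly invisible, yet it often flips the prediction.

The physical-world version of the same idea, as used against the Tesla, simply makes the perturbation with tape on the sign rather than with pixel values.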

When Mobileye executives were first informed of the weakness with respect to altered signs, a company spokesperson minimized the problem by suggesting that “the modified sign would fool even a human into reading 85 instead of 35.” (Technology Review) I disagree, but let’s grant the Mobileye executives their belief. More to the point, would a human driver automatically speed up to 85 MPH just because the sign appears to make it legal?

Picture the scene: You’re driving along a road marked for 35 MPH travel. It is not a freeway or a highway; it might be a major arterial or secondary road littered with driveways, stop lights, and intersections. Would you, even for a fraction of a second, consider driving that road at 85 MPH even if it were legal? Wouldn’t you just take it slow and easy at a conventional safe speed?

If you did accelerate, you could still be ticketed if you were spotted by police in one of the many states that have “safe driving laws.” These laws override specific road rules: If your driving is deemed unsafe, you are in violation.

Really, this story is not about whether we can fool a Tesla or any other assisted driving system into misreading a sign. Due to adversarial testing, over time, they will probably become harder to fool. My point is rather that a fooled human makes a better decision than a fooled machine because the fooled human has common sense, awareness, and a mind that reasons.

The cameras and algorithms behind assisted driving systems need significant improvement before manufacturers can claim any reliable degree of safety. But, without the insight of a human mind, such systems will always be at risk of obvious mistakes that only human judgment can overrule.

Assisted driving systems are best used if they remain just that: Systems that help us be better drivers, not machines wresting control from us when they do not have minds that can address all the possible circumstances.


Here are some other recent articles by Brendan Dixon on self-driving cars and safety issues:

Death spurs demand for more oversight of self-driving cars. The National Transportation Safety Board seeks uniform standards for the previously voluntary information provided by carmakers. “Despite the hype and a few bad actors, here at the Walter Bradley Institute, we believe in AI. Some of our Fellows have made major contributions to its development. But, while we are not Luddites, neither are we doe-eyed believers in ‘all things AI.’ That’s why we pay so much attention to oversight.”

Would selling self-driving cars sooner save lives? Not if we look more closely at the statistics.

Are self-driving cars really safer? A former Uber executive says no. Before we throw away the Driver’s Handbook… As with so many statistics bandied around, that depends on what you count and what you leave out. 

Will industry pressure loosen self-driving car tests? Right now, the regulatory agency is under pressure to accept the industry’s “softball” testing suggestions.

Should Tesla’s Autopilot feature be illegal? A recent study from the United Kingdom on driver competence suggests that maybe it should.

Autopilot is not just another word for “asleep at the wheel” As a recent fatal accident in Florida shows, even sober, attentive drivers often put too much trust into Tesla’s Autopilot system, with disastrous results.

and

Expert: We won’t have self-driving cars for a decade: Machine Learning rapidly moved self-driving cars from the lab to the roads but the underlying technology remains brittle


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he has spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.
