Our Walter Bradley Center director Robert J. Marks is back with Jonathan Bartlett and Eric Holloway, assessing their Top Ten real advances (“Smash Hits”) in AI in 2020. Readers may recall that we offered a fun series during the holidays about the oopses and ums and ers in the discipline (typically hyped by uncritical sources). So now we celebrate the real achievements and our nerds think that #7 is honest recognition of the vulnerabilities of machine learning.
Our story begins at 19:37. Here’s a partial transcript. (Show Notes and Additional Resources follow, along with a link to the complete transcript.)
Robert J. Marks: Hacking AI and exposing vulnerabilities in machine learning? What’s going on here, Eric?
Eric Holloway: AI suffers from a problem known as “underspecification.” …
Because these are such huge parameter models, you don’t really know what the AI does outside of its dataset. Now, a lot of the time you’ve kind of interpolated between data points, so between those points maybe you can know what’s going on, but there are a lot of unknown areas in there. And hackers can prod those unknown areas and nudge the AI models in directions that the hackers want the models to go. And that, I think, is an inescapable symptom of our AI systems, because to make these things work in the real world you have to have these really high parameter models to fit really complex data. But the paradox of the situation is that they become very brittle and much easier to manipulate.
Robert J. Marks: Well, in fact, you hear about the deep convolutional neural networks trained on images: all of a sudden you change a pixel or two in an image and the deep convolutional neural network is totally wrong. So they are incredibly brittle. It’s this sort of thing that you’re talking about, right?
Eric Holloway: Yeah, and it’s not just that the result is completely wrong; the machine’s confidence in its result is complete certainty. It’s absolutely certain about the wrong result. And in this particular example, they took, I think, a self-driving AI, and they could just subtly manipulate traffic signs and make the AI make very disastrous decisions. For example, they gave it a sign that said Speed Limit 35, and they changed the number three slightly so the AI thought it was 85:
In an 18-month-long research process, Trivedi and Povolny replicated and expanded upon a host of adversarial machine-learning attacks including a study from UC Berkeley professor Dawn Song that used stickers to trick a self-driving car into believing a stop sign was a 45-mile-per-hour speed limit sign. Last year, hackers tricked a Tesla into veering into the wrong lane in traffic by placing stickers on the road in an adversarial attack meant to manipulate the car’s machine-learning algorithms.

– Patrick Howell O’Neill, “Hackers can trick a Tesla into accelerating by 50 miles per hour” at MIT Technology Review
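Eric’s point about confidently wrong outputs is the core of gradient-based adversarial attacks such as the fast gradient sign method (FGSM). Here is a toy sketch of the idea: the one-unit “classifier,” its weights, the input, and the step size are all invented for illustration (nothing here comes from a real sign-recognition system). Nudging the input in the direction that most increases the model’s loss flips a confident correct answer into a confident wrong one.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical classifier: a single logistic unit with fixed weights,
# standing in for a trained high-parameter model.
w = np.array([2.0, -3.0, 1.0])

def predict(x):
    """Return P(class = 1) for feature vector x."""
    return sigmoid(w @ x)

# A clean input the model classifies correctly and confidently.
x_clean = np.array([0.5, -0.5, 0.5])
y_true = 1.0
p_clean = predict(x_clean)  # about 0.95: confident and correct

# FGSM: for logistic loss, the gradient w.r.t. the input is (p - y) * w.
# Stepping by the SIGN of that gradient increases the loss as much as
# possible per unit of max-norm perturbation.
eps = 1.2  # exaggerated step size, chosen so the toy example flips
grad_x = (p_clean - y_true) * w
x_adv = x_clean + eps * np.sign(grad_x)

p_adv = predict(x_adv)  # about 0.015: confident and wrong

print(f"clean:       P(class 1) = {p_clean:.3f}")
print(f"adversarial: P(class 1) = {p_adv:.3f}")
```

In a real image attack the step size is tiny, but because images have thousands of pixels, thousands of imperceptible per-pixel nudges add up to the same confident misclassification, which is why a few altered pixels or a sticker can fool a sign reader.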
Note: It’s been worse. At Google in 2015, AI identified a black man and his friends as gorillas, due to age-old glitches in photography swapped into AI. In 2018, Amazon had to dump an AI human resources hiring program that penalized intelligent women.
Hint: AI will not do your thinking for you. Glad we’ve cleared that up. 😉
Here are the Smash Hits to date:
#8 AI 2020 Smash Hit: Big gains in practical self-driving cars. The people who have been pursuing Level Five self-driving are getting nowhere, but Level Four is working well. Jonathan Bartlett: You can think of Level Four self-driving as an engineering project and Level Five as a philosophy project.
#9 AI 2020 Smash Hit: Smarter cars for non-millionaires. If your car is a recent model, an affordable aftermarket kit might transform it into a much smarter car. One possible risk is that a hacker could take over your car, but no matter what we do with AI, we must deal with security issues.
#10 AI 2020 Smash Hit: Translation gets faster and better. Machine translation, properly used, can help us communicate better. What has made AI translation work so well is not that it is perfect but that we can give its output a second pass.
- 02:10 | Introducing Jonathan Bartlett
- 02:39 | Introducing Dr. Eric Holloway
- 03:11 | #10: Text translation (Microsoft – https://docs.microsoft.com/en-us/ai-builder/prebuilt-text-translation, Apple – https://apps.apple.com/us/app/translate-translator-ai/id1375535400, DeepL – https://www.deepl.com/en/home)
- 09:19 | #9: “We hit the road with Comma.ai’s assisted-driving tech at CES 2020” (Road Show by CNET), comma two at the Comma.ai shop
- 13:31 | #8: “Daimler, Waymo, and GM Make Big Gains in Level 4 Self-Driving” (Mind Matters News)
- 19:37 | #7: “Hacking AI: Exposing Vulnerabilities in Machine Learning” (Datanami)
- 22:35 | #6: “After Thursday’s Dogfight, It’s Clear: DARPA Gets AI Right” (Mind Matters News)
- Jonathan Bartlett at Discovery.org
- Eric Holloway at Discovery.org