Our nerds here at the Walter Bradley Center have been discussing the AI hypes of the year. Our director Robert J. Marks, Eric Holloway, and Jonathan Bartlett have been talking about 12 overhyped AI ideas. From AI Dirty Dozen 2020 Part II, here’s #5: AI can go psychotic due to lack of sleep!
Our story begins at 16:03. Here’s a partial transcript. Show Notes and Additional Resources follow, along with a link to the complete transcript.
The story started at Scientific American
Some types of artificial intelligence could start to hallucinate if they don’t get enough rest, just as humans do…
The change will come when (and if) AI systems that mimic living brains are incorporated into the wide range of devices that currently rely on conventional computers and microprocessors to help us through the day. At least that’s the implication of new research that we are conducting in Los Alamos National Laboratory to understand systems that operate much like the neurons inside living brains.
Garrett Kenyon, “Lack of Sleep Could Be a Problem for AIs” at Scientific American
So your smart fridge must stop freezing ice cubes to catch a nap, and your smart furnace has to stop keeping the pipes from freezing so it can relax… Well, if this is progress, why isn’t someone putting in a word for regress?
Robert J. Marks: Does artificial intelligence need to sleep, Eric?
Eric Holloway: Yeah. I looked into this a bit. It’s a little bit hard to figure out what they mean exactly by sleep. And it seems to be one of those cases where they’re trying really hard to make an analogy between some obscure mathematical thing they do and everyday life, just to make AI sound more humanlike. My best guess is… Well, what they say they do is they train these networks and then they subject the networks to waves of noise that, in their opinion, resemble something about brainwaves during sleep. And then, apparently, the networks are able to learn more effectively. What I suspect they’re actually doing is just adding random perturbation to the weights after some training, which is a standard technique. And they just happen to like one particular way of adding noise to the network.
Robert J. Marks: You know, that’s what struck me, too. There’s a method in training neural networks called simulated annealing, wherein you basically add noise into the training process to make it much more effective. And there are other things, such as weight saturation avoidance, where all of the weights, all of the interconnects, are so big that they saturate each of the neurons. And so you have to back them off a little bit. So you have to halt your training in order to back these things off.
But these are problems which have been known for 30 or 40 years. These are techniques which people have practiced for a heck of a long time. And this is an example of what I refer to as seductive semantics. It’s like you said, Eric. They are trying to make this thing sound more human, and they do that by trying to relate it to human attributes when the relationship really isn’t there, is it?
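The technique Holloway and Marks are describing — periodically perturbing a trained network’s weights with noise, in the spirit of simulated annealing — is indeed standard and decades old. Here is a minimal illustrative sketch in Python; the toy problem, variable names, and noise schedule are all assumptions of ours, not taken from the Los Alamos paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-layer "network" (linear model) fit by gradient descent,
# with periodic noise injected into the weights -- the standard
# perturbation technique the transcript refers to. Everything here
# is illustrative, not the researchers' actual procedure.
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w

w = np.zeros(3)
lr = 0.1
for epoch in range(200):
    grad = X.T @ (X @ w - y) / len(X)   # gradient of mean squared error
    w -= lr * grad
    if epoch % 50 == 49:
        # Periodic "rest": perturb the weights with Gaussian noise whose
        # scale decays over time, echoing a simulated-annealing schedule.
        w += rng.normal(scale=0.1 / (1 + epoch), size=w.shape)

mse = np.mean((X @ w - y) ** 2)
```

The point of the decaying scale is that early, large perturbations can knock the weights out of poor local configurations, while later, small ones leave the converged solution essentially intact — no metaphor about sleep required.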
So no. If your furnace doesn’t work, it’s not AI. It’s just failure. Call the company.
Well, here’s the rest of the countdown to date. Read it and blink:
6 in our Top 12 AI hypes: A conversation bot is cool, if you really lower your standards. A system that supposedly generates conversation, but have you noticed what it says? Bartlett: you could also ask “Who was President in 1600,” and it would give you an answer, not recognizing that the United States didn’t exist in 1600.
7 AI Can Create Great New Video Games All by Itself! In our 2020 “Dirty Dozen” AI myths: It’s actually just remixing previous games. Eric Holloway describes it as like a bad dream of Pac-Man. Well, see if it is fun.
8 in our AI Hype Countdown: AI is better than doctors! Sick of paying for health care insurance? Guess what? AI is better! Or maybe, wait… Only 2 of the 81 studies favoring AI used randomized trials. Non-randomized trials mean that researchers might choose data that make their algorithm work.
9: Erica the Robot stars in a film. But really, does she? This is just going to be a fancier Muppets movie, Eric Holloway predicts, with a bit more electronics. Often, making the robot sound like a real person is just an underpaid engineer in the back, running the algorithm a couple of times on new data sets. Also: Jonathan Bartlett wrote in to comment, “Erica, robot film star, is pretty typical modern-day puppeteering: fun, for sure, but not a big breakthrough.”
10: Big AI claims fail to work outside lab. A recent article in Scientific American makes clear that grand claims are often not followed up with great achievements. This problem in artificial intelligence research goes back to the 1950s and is based on refusal to grapple with built-in fundamental limits.
11: A lot of AI is as transparent as your fridge A great deal of high tech today is owned by corporations. Lack of transparency means that people trained in computer science are often not in a position to evaluate what the technology is and isn’t doing.
12! AI is going to solve all our problems soon! While the AI industry is making real progress, so, inevitably, is hype. For example, machines that work in the lab often flunk real settings.