
Sure, AI Could Run the World — Except for Its Fundamental Limits

But many of the basic errors, problems, and limitations have no easy solution

We are told that not only will AI take our jobs but it will take our bosses’ jobs and their bosses’ jobs, and pretty soon AI will be running the world…

We can see those films on Netflix any night.

Science writer and science fiction author Charles Q. Choi, in a longish piece at the Institute of Electrical and Electronics Engineers’ online magazine, Spectrum, looks at the real world, where “Neural networks can be disastrously brittle, forgetful, and surprisingly bad at math.” AI frequently flubs, and it is not clear how to make it flub less. Here are brief notes on three of the seven examples he offers:

“Brittle”: 97% of AIs could not identify a school bus flipped on its side. Not helpful in an emergency.

There are numerous troubling cases of AI brittleness. Fastening stickers on a stop sign can make an AI misread it. Changing a single pixel on an image can make an AI think a horse is a frog. Neural networks can be 99.99 percent confident that multicolor static is a picture of a lion. Medical images can get modified in a way imperceptible to the human eye so medical scans misdiagnose cancer 100 percent of the time. And so on.

Charles Q. Choi, “7 Revealing Ways AIs Fail” at IEEE Spectrum (September 21, 2021)

There are doubtless ways to reduce bad guesses. But we are dealing with systems where no independent thinking is involved, so progress may be slow, variable, and insecure.
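To make the brittleness concrete, here is a rough sketch, assuming only that scikit-learn is installed. It uses a plain logistic-regression model on the small 8x8 digits dataset rather than the deep networks in Choi’s examples, but the mechanism is the same: a tiny, deliberately chosen nudge to the pixels can flip the model’s answer even though the image is essentially unchanged.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train an ordinary classifier on handwritten digits.
X, y = load_digits(return_X_y=True)
X = X / 16.0                                     # scale pixels to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

x = X_test[0]
pred = clf.predict([x])[0]                       # the model's current answer
# Aim at whichever class the model currently ranks second for this image.
target = np.argsort(clf.decision_function([x])[0])[-2]

# For a linear model, the direction that most favours the target class is
# simply the difference between the two classes' weight vectors.
direction = np.sign(clf.coef_[target] - clf.coef_[pred])
for eps in np.arange(0.01, 0.31, 0.01):
    x_adv = np.clip(x + eps * direction, 0.0, 1.0)
    if clf.predict([x_adv])[0] == target:
        print(f"Answer flipped from {pred} to {target} with a per-pixel "
              f"change of at most {eps:.2f} on a 0-to-1 scale")
        break
else:
    print("No flip found in this crude, limited search")

The point is not this particular toy model but the pattern: the change that fools the classifier is far smaller than anything a person would notice or care about.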

“Forgetful”: Instead of building on memory from year to year, AI can “forget” important stuff.

In the beginning, the researchers trained their neural network to spot one kind of deepfake. However, after a few months, many new types of deepfake emerged, and when they trained their AI to identify these new varieties of deepfake, it quickly forgot how to detect the old ones.

This was an example of catastrophic forgetting—the tendency of an AI to entirely and abruptly forget information it previously knew after learning new information, essentially overwriting past knowledge with new knowledge. “Artificial neural networks have a terrible memory,” Tariq says.

Charles Q. Choi, “7 Revealing Ways AIs Fail” at IEEE Spectrum (September 21, 2021)

Again, proposed remediation strategies may very well work, but the limitation remains fundamental: There is no one “in there” to do the remembering. No one “in there” is concerned about forgetting.
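For readers who want to see the effect, here is a minimal sketch, again assuming scikit-learn. It is not the deepfake detector Tariq describes; it simply trains a small network on the standard 8x8 digits images (Task A), then on the same images with their pixels scrambled (Task B), and checks how much of Task A survives. Typically the Task A accuracy collapses once Task B has been learned.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X = X / 16.0                                     # scale pixels to [0, 1]

# Task B: identical labels, but every image's pixels are shuffled the same way.
perm = rng.permutation(X.shape[1])
X_a_train, X_a_test, y_a_train, y_a_test = train_test_split(X, y, random_state=0)
X_b_train, y_b_train = X_a_train[:, perm], y_a_train

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
classes = np.unique(y)

for _ in range(30):                              # learn Task A
    net.partial_fit(X_a_train, y_a_train, classes=classes)
print("Task A accuracy after learning A:", net.score(X_a_test, y_a_test))

for _ in range(30):                              # learning Task B overwrites A
    net.partial_fit(X_b_train, y_b_train, classes=classes)
print("Task A accuracy after learning B:", net.score(X_a_test, y_a_test))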

“Surprisingly bad at math”: Even though some AIs crunch huge numbers, most are not as reliable as a pocket calculator, Choi reports:

For example, Hendrycks and his colleagues trained an AI on hundreds of thousands of math problems with step-by-step solutions. However, when tested on 12,500 problems from high school math competitions, “it only got something like 5 percent accuracy,” he says. In comparison, a three-time International Mathematical Olympiad gold medalist attained 90 percent success on such problems “without a calculator,” he adds.

Charles Q. Choi, “7 Revealing Ways AIs Fail” at IEEE Spectrum (September 21, 2021)

That limits AIs’ usefulness in scientific research. It’s not clear why AIs are bad at math, but it might be because math requires a sequence of steps while AIs rely on parallel processing. In any event, there is no one “in there” who wants to solve the problem.
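A small sketch, once more assuming scikit-learn and not tied to Hendrycks’s benchmark, illustrates one side of this unreliability: a network trained to add numbers between 0 and 10 looks competent inside that range but does not carry the rule of addition beyond it, whereas exact arithmetic does.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(5000, 2))     # pairs of numbers in [0, 10]
y_train = X_train.sum(axis=1)                    # their exact sums

net = MLPRegressor(hidden_layer_sizes=(64,), activation="tanh",
                   max_iter=3000, random_state=0).fit(X_train, y_train)

# Inside the training range the network looks fine; far outside it,
# the learned approximation typically drifts far from the true sum.
for a, b in [(3.0, 4.0), (9.0, 8.0), (50.0, 50.0)]:
    approx = net.predict([[a, b]])[0]
    print(f"{a} + {b}: network says {approx:.2f}, arithmetic says {a + b}")

A pocket calculator, by contrast, applies the rule of addition exactly, no matter how large the numbers get.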

If your job requires common sense, knowledge of the world, a good memory, and basic math skills — and AI is really coming for your job — that could be the worst news your boss heard all year. It appears that there are some things computers don’t do by their very nature.


You may also wish to read:

Why human creativity is not computable. There is a paradox involved with computers and human creativity, something like Gödel’s Incompleteness Theorems or the Smallest Uninteresting Number.

A type of reasoning AI can’t replace. Abductive reasoning requires creativity, in addition to computation.

and

No AI overlords? What is Larson arguing and why does it matter? Information theorist William Dembski explains that computers can’t do some things by their very nature.


