Mind Matters Natural and Artificial Intelligence News and Analysis

AI Researcher: Stop Calling Everything “Artificial Intelligence”

It’s not really intelligence, says Berkeley’s Michael Jordan, and we risk misunderstanding what these machines can really do for us

Computer scientist Michael I. Jordan, a leading AI researcher, says today’s artificial intelligence systems aren’t actually intelligent and people should stop talking about them as if they were:

They are showing human-level competence in low-level pattern recognition skills, but at the cognitive level they are merely imitating human intelligence, not engaging deeply and creatively, says Michael I. Jordan, a leading researcher in AI and machine learning. Jordan is a professor in the department of electrical engineering and computer science, and the department of statistics, at the University of California, Berkeley.

Kathy Pretz, “Stop Calling Everything AI, Machine-Learning Pioneer Says” at IEEE Spectrum (March 31, 2021)

Their principal role, he says, is to “augment human intelligence, via painstaking analysis of large data sets in much the way that a search engine augments human knowledge by organizing the Web.”

We see their value in many fields today. For example, machine learning can motor through thousands of ancient cuneiform texts that may yield valuable information but that no human has the thousands of hours required to read. It deciphered a charred, unwrappable scroll recovered from an ancient synagogue fire, which turned out to contain a portion of the Book of Leviticus, the oldest authentic manuscript found so far. It can scan largely unchanging skies, detecting the faintest ripple for astronomers to analyze, making their research time much more efficient.

But ask AI what it all really means and you will not get so much as a blank stare. (Or, depending on the program, you could get a vast stream of autobabble from the internet, a jumble of various pundits’ opinions, which may be a mere distraction at the time.)

Jordan adds,

“People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don’t have that, but people are talking as if we do.”

Kathy Pretz, “Stop Calling Everything AI, Machine-Learning Pioneer Says” at IEEE Spectrum (March 31, 2021)

Jordan isn’t alone in saying this. Ben Medlock, co-founder of SwiftKey, has cautioned against the idea that computers can just somehow evolve intelligence (they can’t “evolve” anything because they aren’t alive).

David Watson of the Oxford Internet Institute and the Alan Turing Institute discourages us from thinking of computers as metal people longing to be understood. He calls deep neural networks (DNNs) like AlphaZero “brittle, inefficient, and myopic,” limitations that often go unrecognized.

The basic problem is that popular media tend to highlight visions of artificial superintelligence taking over, visions expressed by, say, Stephen Hawking, Martin Rees, and Richard Dawkins. The views of experts who actually develop and work with the technology tend to be downplayed because they are not sensational. So the rest of us get a skewed picture.

In reality, artificial intelligence has many limitations: computers are faster but not more intelligent; no computer demonstrates creativity; and computers don’t experience things, which limits comprehension. As a result, AI achievements are necessarily narrow in focus.

A common media-driven misunderstanding is the idea that computer successes in games like chess predict an AI ability to take over the world. As philosopher and futurist George Gilder points out in Gaming AI, games like chess are played on a map. The rules govern an unchanging map; there may be millions of possibilities, but they are, in principle, computable. Real life is not like that. By definition, the structure of a game excludes just the sorts of events that happen constantly in the real world, where, as we often complain, the map is not the territory (“No one told me it would be like this!”). Creative thinking, which is different from computing, is the key to success in the real world.

It’s good to be reminded of that every so often.


You may also wish to read: Can AI really evolve into superintelligence all by itself? We can’t just turn a big computer over to evolution and go away and hope for great things. Jonathan Bartlett: If someone were to invent a universally good search through a search space, it would have to be done on something that isn’t a computer. Computers are powerful because they have limitations.
