
Just a light frost—or AI winter?

It’s nice to be right once in a while—check out the evidence for yourself

About a year ago, I wrote that mounting AI hype would likely give way to yet another AI winter. Now, according to the panelists at “the world’s leading academic AI conference,” the temperature is already falling.

Most recent advances in AI have come through a pair of related technologies: Deep Learning and Neural Networks. The ideas behind these, however, are more than 70 years old. Neural Network development, starting in 1943, predates the solid-state transistor (1947):

Warren McCulloch and Walter Pitts (1943) opened the subject by creating a computational model for neural networks… The first functional networks with many layers were published by Ivakhnenko and Lapa in 1965, as the Group Method of Data Handling. The basics of continuous backpropagation were derived in the context of control theory by Kelley in 1960 and by Bryson in 1961, using principles of dynamic programming.

“Artificial Neural Networks” at Wikipedia

Sheer computing power, coupled with immense quantities of data (often gathered from the Internet), moved the ideas to the foreground. Further research and refinements, such as Reinforcement Learning, improved the results. But, as I suggested last year, the hype was exceeding the actual promise.
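To see just how old the underlying recipe is, here is a minimal sketch in Python (using NumPy) of a tiny two-layer network learning the XOR function by backpropagation: weighted sums, a nonlinearity, and gradient descent. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not drawn from any system discussed here.

# A minimal sketch of the decades-old ideas described above, not a
# reconstruction of any modern system: a tiny two-layer network learning
# XOR by backpropagation. All sizes and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic function a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-input, 4-hidden-unit, 1-output network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: weighted sums pushed through a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each layer.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should end up close to [[0], [1], [1], [0]]

Scaled up by many orders of magnitude in data, parameters, and compute, this same loop of weighted sums, nonlinearities, and gradient updates is what drives today’s Deep Learning systems.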

At this year’s NeurIPS conference, researchers admitted that the end is near:

“‘We’re kind of like the dog who caught the car,’ [Blaise] Aguera y Arcas [one of Google’s top researchers] said. Deep learning has rapidly knocked down some longstanding challenges in AI—but it doesn’t immediately seem well suited to many that remain.”

Tom Simonite, “A Sobering Message About the Future at AI’s Biggest Party” at Wired

Later, Yoshua Bengio, director of Mila, an AI institute in Montreal and one of the “godfathers” of Deep Learning, noted that

… the technique yields highly specialized results; a system trained to show superhuman performance at one videogame is incapable of playing any other. ‘We have machines that learn in a very narrow way,’ Bengio said. ‘They need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.’

Tom Simonite, “A Sobering Message About the Future at AI’s Biggest Party” at Wired

It wasn’t just researchers at a geeky conference who were forecasting gloom:

Discussion of the limitations of existing AI technology is growing too. Optimism from Google and others that self-driving taxi fleets could be deployed relatively quickly has been replaced by fuzzier and more restrained expectations. Facebook’s director of AI said recently that his company and others should not expect to keep making progress in AI just by making bigger deep learning systems with more computing power and data. ‘At some point we’re going to hit the wall,’ he said. ‘In many ways we already have.’

Tom Simonite, “A Sobering Message About the Future at AI’s Biggest Party” at Wired

The misguided belief that our human intelligence is the product of an undirected, accidental process has encouraged unfounded expectations that AI could somehow just continue to happen. With those expectations falling short, what choice remains?

At the conference, researchers speculated that new techniques, perhaps more inspired by biology, will advance AI. I remain doubtful.

It is too bad that, in the face of their own data, they fail to draw the obvious conclusion: Minds do not spring from accidents, no matter how much time is allowed. Only a mind can create a mind.

And if we dispense with that view entirely, we end up with far less, not far more, as the failing expectations for AI now show.


If you enjoyed this piece, here are some more of Brendan Dixon’s recent reflections on overblown claims for AI, especially when they conflict with culture:

Pizza robots get the pink slip. The technology was sheer genius; the pizza lousy.

Fan tries programming AI jazz, gets lots and lots of AI… Jazz is spontaneous, but spontaneous noise is not jazz

and

Boeing’s sidelined fuselage robots: What went wrong? It’s not what we learn, it’s what we forget

And, more seriously: AI Winter Is Coming: Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time on other types of software, he has remained engaged and interested in Artificial Intelligence.
