
Isn’t It Time for an Artificial Intelligence Reality Check?

Why do we think we’re so close to artificial general intelligence (AGI) when there are so many obstacles to overcome?

The Singularity is coming! The Singularity is coming!

If you’re getting tired of hearing that “strong AI” is just around the corner, you’re not alone. The Stephen Hawkings, Ray Kurzweils, and Elon Musks of the world have been putting humanity on notice with predictions of machines overtaking humans for decades.

It’s either the dawn of utopia or the start of a nightmare, depending on who’s talking. And every time they’re issued, the media jumps on them, because being on the cusp of a new era of intelligent beings is news.

What’s missing from these confident claims, however, is a realistic assessment of the problems that rank-and-file computer scientists wrestle with every day — namely, the problem of intelligence.

In their single-minded zeal, the futurists assume that a bridge exists between narrow applications of AI and the general intelligence humans possess. But no such bridge exists. We’re not even on the right road to such a bridge.

In his recent book The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, computer scientist and tech entrepreneur Erik J. Larson explains how a simplified view of intelligence has permeated AI research since the beginning, retarding progress in the field and putting the dream of truly intelligent computers at odds with reality: “[W]hen myth masquerades as science and certainty, it confuses the public, and frustrates non-mythological researchers who know that major theoretical obstacles remain unsolved.”

The myth of AI, says Larson, can be traced back to an error about intelligence made by computer pioneer Alan Turing in the mid-20th century.

Turing is credited with laying the foundations of computer science and the artificial intelligence program, but in setting out to prove his theory that intuition could be embodied in machines, he reduced human intelligence to problem-solving.

But this problem-solving view of intelligence leaves out critical components: human intelligence is situational, contextual, and externalized, part of “a broader system which includes your body, your environment, other humans, and culture as a whole.”

Without this broader context — powered by inference — AI applications aren’t truly intelligent.

Although the modern AI project has had some successes thus far in areas like machine translation, learning algorithms, and data crunching, Larson says a better term for those applications might be “human-task simulation.” The label “intelligent” hasn’t been earned yet.

How did we get this far into AI only to find ourselves at an impasse? Why do we think we’re so close to artificial general intelligence (AGI) when there are so many obstacles to overcome? A little context will help.

Until the Industrial Revolution began in the 18th century, innovation in human tools moved at a much slower pace and happened locally, wherever craftsmen worked. The new age of machines brought improvements in transportation, and cities soon sprang up around factories.

As philosopher Hannah Arendt puts it, Homo sapiens (literally: wise or knowledgeable man) became Homo faber (man the maker) as many put their faith in science and technology over religion and philosophy to ameliorate their existence and determine their future.

This techno-scientific view, sharpened by Charles Darwin’s assertion that humans evolved naturally as part of a great branching tree of life, redefined human nature. In an era of great technological innovation, some began to think of human intelligence in mathematical or mechanical terms: the mind as nothing more than a black box of responses to outside stimuli, a complicated machine.

If we could build steam engines and skyscrapers, why couldn’t we build ourselves? Out of this milieu the artificial intelligence project was born.

Turing wasn’t the only one to misjudge the nature of intelligence in the early decades of AI. Jack Good, Turing’s fellow code breaker during World War II, developed the idea of the ultraintelligent machine in the 1960s. If a machine could become as smart as a human, he reasoned, it could design a machine smarter than itself, setting off an intelligence explosion.

San Diego State University computer scientist and award-winning sci-fi author Vernor Vinge carried Good’s idea forward, dubbing the point when machines overtake humans “the Singularity.” In a technical paper presented at a NASA-sponsored symposium in 1993, Vinge wrote: “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

Sounds familiar.

In his book, Larson shows us just how complex human intelligence is with a deep dive into the inner workings of reasoning. And he concludes with a look at the future of the myth, showing that there’s no way for current AI to “evolve” general intelligence.

To move forward, we need to start at the beginning again, acknowledging the true scope of human intelligence, reassessing our goals for the future of computing, and “pursuing an agenda where human ingenuity can thrive.”

(This story originally appeared at Newsmax on September 10, 2021.)


Andrew McDiarmid

Director of Podcasting and Senior Fellow
Andrew McDiarmid is Director of Podcasting and a Senior Fellow at the Discovery Institute. He is also a contributing writer to MindMatters.ai. He produces ID The Future, a podcast from the Center for Science & Culture that presents the case, research, and implications of intelligent design and explores the debate over evolution. He writes and speaks regularly on the impact of technology on human living. His work has appeared in numerous publications, including the New York Post, Houston Chronicle, The Daily Wire, San Francisco Chronicle, Real Clear Politics, Newsmax, The American Spectator, The Federalist, and Technoskeptic Magazine. In addition to his roles at the Discovery Institute, he promotes his homeland as host of the Scottish culture and music podcast Simply Scottish, available anywhere podcasts are found. Andrew holds an MA in Teaching from Seattle Pacific University and a BA in English/Creative Writing from the University of Washington. Learn more about his work at andrewmcdiarmid.org.
