
AI machines taking over the world?

It’s a cool apocalypse but does that make it more likely?

Doomsday thinking is easily mocked. The character marching, hairy and barefoot, under his "The End Is Near" sign is a staple of cartoons in middlebrow mags. Yet when media magnets such as the late Stephen Hawking ("worst event in the history of our civilization") and Elon Musk ("an immortal dictator from which we would never escape") market doomsday scenarios, it's a cool apocalypse.

Only recently, Henry Kissinger emerged from the 1970s to worry that "human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them."

Swedish philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, noted in 2014 that roughly half the world’s AI specialists think that human-level machine intelligence will be achieved by 2040 and 90 percent opt for 2075: “Biological humans, even if enhanced, will be outclassed.”

But in this case, few laugh at the claims. Do the science celebs know something we don’t—or are they assuming something we aren’t? Either way, it’s a wonder Chicken Little doesn’t file a grievance…

The AI prophets are indeed assuming something we aren't. One thing that they do not discuss much is the fact that human-level intelligence is bound up with consciousness, and we do not know what consciousness even is. Every quarter, it seems, a new groundbreaking theory of consciousness is aired in the science media. Views aired in mainstream publications include the postulate that everything is conscious (your handheld is already conscious and so is your coffee mug) and the claim that consciousness is merely an illusion (whose illusion it could even be is not under discussion).

Twenty years ago, two prominent neuroscientists made a bet that a signature of human consciousness would be found in the brain within the next twenty-five years. The bet has only five years to run and there is not much on the horizon. A reasonable response to a claim that machines can achieve human intelligence or consciousness is to wonder how machines can be developed to achieve a goal that cannot be clearly defined or even identified.

It’s not as though there haven’t been failed AI predictions. In 1956, top programmers predicted that they could get machines to “use language, form abstract concepts and even improve themselves” over a single summer.

Their conference did, however, succeed in coining the term “artificial intelligence.”

When reviewing 2018's "mind-boggling predictions" for AI, such as "We could accurately predict the future, based on data and high-level analytics," we might want to look at AI expert Rodney Brooks's more skeptical approach in "The Seven Deadly Sins of AI Predictions" (Technology Review, 2017):

The claims are ludicrous. (I try to maintain professional language, but sometimes …) For instance, the story appears to say that we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.

Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes. But why are people making them? I see seven common reasons.

The second of his Seven Deadly Sins is reliance on magical thinking:

This is a problem I regularly encounter when trying to debate with people about whether we should fear artificial general intelligence, or AGI—the idea that we will build autonomous agents that operate much like beings in the world. I am told that I do not understand how powerful AGI will be. That is not an argument. We have no idea whether it can even exist. I would like it to exist—this has always been my own motivation for working in robotics and AI. But modern-day AGI research is not doing well at all on either being general or supporting an independent entity with an ongoing existence. It mostly seems stuck on the same issues in reasoning and common sense that AI has had problems with for at least 50 years. All the evidence that I see says we have no real idea yet how to build one. Its properties are completely unknown, so rhetorically it quickly becomes magical, powerful without limit.

Nothing in the universe is without limit.

Watch out for arguments about future technology that is magical. Such an argument can never be refuted. It is a faith-based argument, not a scientific argument.

The usual problems with doomsaying also apply to predictions for artificial intelligence. For example, most predicted doomsdays of any kind never happen, because many unforeseen sequences of decisions and events change the scene. The doomsday predictions for the first Earth Day in 1970 ("civilization will end within 15 or 30 years unless immediate action is taken against problems facing mankind") were largely obviated by the fact that the usual muddle of public and private actions was all that was needed to substantially reduce pollution.

And just think: the Earth Day people had the advantage that pollution, and the remedies for it, were straightforward to define. We can hardly say the same for human consciousness.

