Artificial intelligence concept. Robotic hand is holding human brain. 3D rendered illustration.

Failed Prophecies of the Big “AI Takeover” Come at a Cost

Like IBM Watson in medicine, they don’t just fail; they take time, money, and energy from more promising digital innovations

Surveying the timeline of prophecies that AI will take over “soon” is entertaining. At Slate, business studies profs Jeffrey Funk and Gary Smith offer a whirlwind tour starting in the 1950s, with stops along the way at 1970 (“In from three to eight years we will have a machine with the general intelligence of an average human being”) and at 2014:

In 2014, Ray Kurzweil predicted that by 2029, computers will have human-level intelligence and will have all of the intellectual and emotional capabilities of humans, including “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy.” As we move closer to 2029, Kurzweil talks more about 2045.

Jeffrey Funk and Gary Smith, “Why A.I. Moonshots Miss” at Slate (May 4, 2021)

Does he indeed? Advisedly.

But for Funk and Smith, there is a serious side to all this: “moonshots” (apocalyptic prophecies that are statistically likely to fail) cost time and money. How much money? Henry Markram’s Human Brain Project, aimed at reverse-engineering the human brain via a supercomputer, crashed in 2015, after the European Union had promised €1.3 billion.

Bold predictions of great gains (and great riches) start with the assumption that humans are easily replaced, even in delicate fields such as health care. The iconic example is Watson, IBM’s Jeopardy champ repurposed as a health care system:

Swayed by IBM’s Watson boasts, McKinsey predicted a 30–50 percent productivity improvement for nurses, a 5–9 percent reduction in health care costs, and health care savings in developed countries equal to up to 2 percent of GDP. The Wall Street Journal published a cautionary article in 2017, and soon others were questioning the hype. A 2019 article in IEEE Spectrum concluded that Watson had “overpromised and underdelivered.” Soon afterward, IBM pulled Watson from drug discovery, and media enthusiasm waned as bad news about A.I. health care accumulated. For example, a 2020 Mayo Clinic and Harvard survey of clinical staff who were using A.I.-based clinical decision support to improve glycemic control in patients with diabetes gave the program a median score of 11 on a scale of 0 to 100, with only 14 percent saying that they would recommend the system to other clinics.

Jeffrey Funk and Gary Smith, “Why A.I. Moonshots Miss” at Slate (May 4, 2021)

Perhaps some visionaries had not considered that, in health care, human interactions are important. Assessing in 2019 why Watson was flopping in medicine, Gary Smith noted that computers don’t know what’s sense and what’s nonsense or “which output is relevant and which output is irrelevant.” Failures at that level would not inspire confidence in patients, and it is not clear how to prevent them.

Funk and Smith remind us in closing that the reason the prophecies failed is that “we didn’t anticipate that building a computer that surpasses the human brain is the moonshot of all moonshots.” But, they note, not everything fails. The technologies that succeed are typically not moonshots. Transistors, home computers, and the internet were originally developed to meet everyday practical needs, and billions of people turned out to have similar needs.

The authors are not alone in raising these questions. AI researcher Melanie Mitchell’s recent paper outlines the problems with assuming that narrow AI can easily be ramped up into general intelligence. That’s also the thrust of computer scientist Erik J. Larson’s new book, The Myth of Artificial Intelligence. There’s even a film out now about the hope, hype, and crash of the Human Brain Project.

Is it possible that reality-based thinking is trending?


Check out this article by Jeffrey Funk and Gary Smith as well:

Stanford’s AI index report: how much is BS? Some measurements of AI’s economic impact sound like the metrics that fueled the dot-com bubble.


