
Nobel Prize Economist Tells The Guardian, AI Will Win

But when we hear why he thinks so, don’t be too sure

Nobel Prize-winning economist (2002) Daniel Kahneman, 87, gave an interview this month to The Guardian in which he observed that belief in science is not much different from belief in religion with respect to the risks of unproductive noise clouding our judgment. He has been in the news lately as a co-author of a new book, Noise: A Flaw in Human Judgment, which applies his ideas about human error and bias to organizations. He told The Guardian, for example, that he places his faith, “if there is any faith to be placed,” in organizations rather than individuals. Curiously, he doesn’t seem to privilege science organizations:

I was struck watching the American elections by just how often politicians of both sides appealed to God for guidance or help. You don’t talk about religion in the book, but does supernatural faith add to noise?

I think there is less difference between religion and other belief systems than we think. We all like to believe we’re in direct contact with truth. I will say that in some respects my belief in science is not very different from the belief other people have in religion. I mean, I believe in climate change, but I have no idea about it really. What I believe in is the institutions and methods of people who tell me there is climate change. We shouldn’t think that because we are not religious, that makes us so much cleverer than religious people. The arrogance of scientists is something I think about a lot.

Tim Adams, “Daniel Kahneman: ‘Clearly AI is going to win. How people are going to adjust is a fascinating problem’” at The Guardian (May 16, 2021)

He is surer of the competence of AI than that of scientists:

Do you feel that there are wider dangers in using data and AI to augment or replace human judgment?

There are going to be massive consequences of that change that are already beginning to happen. Some medical specialties are clearly in danger of being replaced, certainly in terms of diagnosis. And there are rather frightening scenarios when you’re talking about leadership. Once it’s demonstrably true that you can have an AI that has far better business judgment, say, what will that do to human leadership?

Tim Adams, “Daniel Kahneman: ‘Clearly AI is going to win. How people are going to adjust is a fascinating problem’” at The Guardian (May 16, 2021)

Curiously, things are not working out that way in medicine. As Jeffrey Funk and Gary Smith noted recently at Slate:

Swayed by IBM’s Watson boasts, McKinsey predicted a 30–50 percent productivity improvement for nurses, a 5–9 percent reduction in health care costs, and health care savings in developed countries equal to up to 2 percent of GDP. The Wall Street Journal published a cautionary article in 2017, and soon others were questioning the hype. A 2019 article in IEEE Spectrum concluded that Watson had “overpromised and underdelivered.” Soon afterward, IBM pulled Watson from drug discovery, and media enthusiasm waned as bad news about A.I. health care accumulated. For example, a 2020 Mayo Clinic and Harvard survey of clinical staff who were using A.I.-based clinical decision support to improve glycemic control in patients with diabetes gave the program a median score of 11 on a scale of 0 to 100, with only 14 percent saying that they would recommend the system to other clinics.

Jeffrey Funk and Gary Smith, “Why A.I. Moonshots Miss” at Slate (May 4, 2021)

Is that limitation temporary or permanent? In any event, Funk and Smith are not alone in failing to notice an AI apocalypse:

AI researcher Melanie Mitchell’s recent paper outlines the problems with assuming that narrow AI intelligence can easily be ramped up into general intelligence. That’s also the thrust of computer scientist Erik J. Larson’s new book, The Myth of Artificial Intelligence. There’s even a film out now about the hope, hype, and crash of the Human Brain Project. Is it possible that reality-based thinking is trending?

News, “Failed prophecies of the big “AI takeover” come at a cost” at Mind Matters News (May 5, 2021)

One may well ask why AI would perform any differently in a takeover of business decisions. Back to The Guardian, where the interviewer suggests that such caution is itself a backlash:

Are we already seeing a backlash against that? I guess one way of understanding the election victories of Trump and Johnson is as a reaction against an increasingly complex world of information – their appeal is that they are simple impulsive chancers. Are we likely to see more of that populism?

I have learned never to make forecasts. Not only can I certainly not do it – I’m not sure it can be done. But one thing that looks very likely is that these huge changes are not going to happen quietly. There is going to be massive disruption. The technology is developing very rapidly, possibly exponentially. But people are linear. When linear people are faced with exponential change, they’re not going to be able to adapt to that very easily. So clearly, something is coming… And clearly AI is going to win [against human intelligence]. It’s not even close. How people are going to adjust to this is a fascinating problem – but one for my children and grandchildren, not me.

Tim Adams, “Daniel Kahneman: ‘Clearly AI is going to win. How people are going to adjust is a fascinating problem’” at The Guardian (May 16, 2021)

Some people do make forecasts. Futurist Ray Kurzweil has not been afraid to prophesy, as Funk and Smith note:

In 2014, Ray Kurzweil predicted that by 2029, computers will have human-level intelligence and will have all of the intellectual and emotional capabilities of humans, including “the ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy.” As we move closer to 2029, Kurzweil talks more about 2045.

Jeffrey Funk and Gary Smith, “Why A.I. Moonshots Miss” at Slate (May 4, 2021)

All trends eventually taper off or change direction, even if they are exponential. Perhaps the big question is, why would AI change that?


You may also wish to read:

No AI overlords? What is Larson arguing and why does it matter? Information theorist William Dembski explains: computers can’t do some things by their very nature. If a needed thought process is not computational, a bigger or better computer is not the answer.

and

Failed prophecies of the big “AI takeover” come at a cost. Like IBM Watson in medicine, they don’t just fail; they take time, money, and energy from more promising digital innovations. Business profs Jeffrey Funk and Gary Smith compare the costs and benefits of AI hype with those of the small innovations that change the world.


