
Neuroscientist: Conscious AI Is Not an Insurmountable Problem


Neuroscientist Ryota Kanai, founder and CEO of Tokyo-based startup Araya, aims to “understand the computational basis of consciousness and to create conscious AI.” He isn’t sure, he says, if we want AI to be conscious. But, technically, he doesn’t see it as an insurmountable problem:

If we can’t figure out why AIs do what they do, why don’t we ask them? We can endow them with metacognition—an introspective ability to report their internal mental states. Such an ability is one of the main functions of consciousness. It is what neuroscientists look for when they test whether humans or animals have conscious awareness. For instance, a basic form of metacognition, confidence, scales with the clarity of conscious experience. When our brain processes information without our noticing, we feel uncertain about that information, whereas when we are conscious of a stimulus, the experience is accompanied by high confidence: “I definitely saw red!” …

If we consider introspection and imagination as two of the ingredients of consciousness, perhaps even the main ones, it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine. We want our machines to explain how and why they do what they do. Building those machines will exercise our own imagination. It will be the ultimate test of the counterfactual power of consciousness.

Ryota Kanai, “Do you want AI to be conscious?” at Nautilus (June 9, 2021)

Between the two passages quoted above, Kanai describes his team’s various efforts to get machines to think like people, on the assumption that the basis of consciousness is computational.

It all seems confused. First, “metacognition” means “thinking about what we are thinking.” To do that, we must actually be thinking, not computing. Developing a machine that can think, as opposed to merely compute, would seem like a good first step. Anything like consciousness (which includes metacognition) is well beyond that.

Also, what does it mean to say, “it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine”? Immortality seems “so clearly useful” to human beings too. Is it inevitable that we will conjure it up?

The essay is a classic of promissory thinking: past successes are taken to guarantee future successes. But not so fast. Everything has limits. It is easy to make great strides when we are well within those limits — bigger, faster, cheaper computers come to mind. But it is precisely at the fuzzier boundaries that gains become harder.


In Gaming AI, philosopher of technology George Gilder reminds us that the belief that current AI triumphs will be endlessly replicated rests on a misunderstanding. Computers triumphed at chess and Go because the game board — a map, if you like — is the territory. Thus the territory is fully computable. But in most matters in life, the map is a guide to the territory, not the territory itself. Thus much of the thinking we need to do is non-computational. Creativity, which computers don’t do, is essential.

For example, the reason IBM Watson flopped at replacing doctors wasn’t a lack of sophisticated technology; it was the fact that technology of any kind is only one component of the practice of medicine. Thus the news that Sophia the robot is being retooled to help with senior health care, owing to personnel shortages, is not especially reassuring.

Under the circumstances, talk of AI becoming “conscious” feels like science fiction without the graphics.


You may also wish to read:

Can we apply tests for consciousness to artificial intelligence? A robot could be programmed to say Ow! and withdraw its hand from a hot object. But did it feel anything? Angus Menuge points out that the difficulty with identifying AI consciousness would be determining whether consciousness is being duplicated or merely mimicked.


Mind Matters News
