Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: LaMDA

Can AI Ever Be Sentient? A Conversation with Blake Lemoine

AI can mimic sentience, but can it ever be sentient? On this episode, we return to our conversation with former Google engineer Blake Lemoine. Host Robert J. Marks has a lively back-and-forth with Lemoine, who made national headlines when, as an employee of Google, he claimed that Google’s AI software, dubbed LaMDA, might be sentient. Lemoine recounts his experience at Google and Read More ›

Lemoine and Marks: A Friendly Discussion on AI’s Capacities

Marks and Lemoine disagree on whether AI can be sentient

Today’s featured video from the 2022 COSM conference features a distinguished panel of artificial intelligence (AI) experts, including Blake Lemoine and Robert J. Marks. They debate the meaning of artificial intelligence, what the future holds for its applications (both positive and negative), and how far AI can be taken in terms of mimicking and even exceeding human capabilities. Lemoine is famous for his claims on AI’s “sentience” and his work at Google on the Large Language Model system “LaMDA.” Marks, on the other hand, appreciates Lemoine’s view but strongly maintains that creativity is a uniquely human capacity, and that machines will never attain consciousness. For more on Marks’s views, consider purchasing his 2022 book Non-Computable You: What You Do That Read More ›

Lemoine at COSM 2022: A Conversation on AI and LaMDA

Will AI ever become "sentient"?

Blake Lemoine, ex-Google employee and AI expert, sat down with Discovery Institute’s Jay Richards at the 2022 COSM conference last November. Together they discussed AI, Google, and how and why Lemoine got to where he is today. Lemoine famously claimed last year that LaMDA, Google’s breakthrough AI technology, had achieved sentience. Lemoine explains that many people at Google thought AI had the potential for sentience, but that such technology should not be developed prematurely for fear of the negative impacts it could have on society. You can listen to their interesting and brief conversation in the video below, and be sure to see more sessions from the 2022 COSM conference featuring Lemoine and other leaders and innovators in technology on Read More ›

Blake Lemoine and Robert J. Marks on the Mind Matters Podcast

Marks and Lemoine discuss sentience in AI and the question of the soul
Lemoine thinks AI can be sentient, but Marks firmly rejects such a notion. While disagreeing, they maintain a respectful dialogue. Well worth listening to. Read More ›

Blake Lemoine and the LaMDA Question

In this continuation of last week’s conversation, ex-Googler Blake Lemoine tells Robert J. Marks what originally got him interested in AI: reading the science fiction of Isaac Asimov as a boy in rural Louisiana. The two go on to discuss and debate sentience in AI, non-computable traits of human beings, and the question of the soul. Additional Resources

Robert J. Marks on The Laura Ingraham Show

In response to those who believe AI will take over the world, Marks says, "Look at history."

Robert J. Marks, director of Discovery Institute’s Walter Bradley Center, recently appeared on a podcast episode with Fox News host Laura Ingraham to talk about artificial intelligence, tech, and Dr. Marks’s book Non-Computable You: What You Do That AI Never Will. Ingraham prefaced the conversation with some thoughts on the rapidly evolving technological world we find ourselves in, and the changes such developments are inflicting on society. In response to the futurism and unbounded optimism about AI systems like ChatGPT that many modern figures hold, Marks said that what computers do is strictly algorithmic: This leads us to the idea of whether or not there are non-computable characteristics of human beings, and I think there is growing evidence that there Read More ›

A Chat with Blake Lemoine on Google and AI Sentience

Former Google employee Blake Lemoine claimed that the Large Language Model LaMDA was a sentient being. The claim got him fired. In this episode, Lemoine sits down with Robert J. Marks to discuss AI, what he was doing at Google, and why he believes artificial intelligence can be sentient. Additional Resources

Ex-Googler Blake Lemoine Still Thinks AI Is Sentient

Lemoine posits that because AI can appear to act anxious and stressed, it can be assumed to be sentient

Blake Lemoine, who formerly worked for Google, has doubled down on his claim that AI systems like LaMDA and ChatGPT are “sentient.” Lemoine went public with his bold claim in The Washington Post last June and, since parting ways with Google, has not backed down from his beliefs. Lemoine posits that because AI can appear to act anxious and stressed, it can be assumed to be sentient. Maggie Harrison writes at Futurism: An interesting theory, but still not wholly convincing, considering that chatbots are designed to emulate human conversation — and thus, human stories. Breaking under stress is a common narrative arc; this particular aspect of machine behavior, while fascinating, seems less indicative of sentience, Read More ›

Does New A.I. Live Up to the Hype?

Experts are finding ChatGPT and other LLMs unimpressive, but investors aren't getting the memo

The original article was featured at Salon on February 21, 2023. On November 30, 2022, OpenAI announced the public release of ChatGPT, a large language model (LLM) that can engage in astonishingly human-like conversations and answer an incredible variety of questions. Three weeks later, Google’s management — wary that they had been publicly eclipsed by a competitor in the artificial intelligence technology space — issued a “Code Red” to staff. Google’s core business is its search engine, which currently accounts for 84% of the global search market. Its search engine is so dominant that searching the internet is generically called “googling.” When a user poses a search request, Google’s search engine returns dozens of helpful links along with targeted advertisements based on its knowledge of the Read More ›

Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

This story, by Pomona College business and investment prof Gary Smith, was #6 in 2022 at Mind Matters News in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. At any rate: “Chatbots: Still dumb after all these years.” (January 3, 2022) In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of Read More ›

How Google’s LaMDA Resolved an Old Conflict in AI

Will two conflicting views always be in opposition? Or can they sometimes be resolved at a higher level?

In the movie Fiddler on the Roof, there is a debate at one point. After listening to the cases made by both sides of a conflict, a listener agrees with the conclusions of each. Someone points out that “they can’t both be right!” to which the agreeable listener says, “You know, you are also right.” Interestingly, the claim that the two sides of an issue must be in opposition is not always true. The two sides can be in apparent conflict and both be right. Sometimes, but not always. The classic example is the blind men and the elephant. After feeling the elephant’s leg, one blind man says the elephant is like a tree. After feeling the elephant’s tail, another says the elephant Read More ›

Five Reasons AI Programs Are Not ‘Persons’

A Google engineer mistakenly designated one AI program ‘sentient.’ But even if he were right, AI will never be morally equal to humans.

(This story originally appeared at National Review June 25, 2022, and is reprinted with the author’s permission.) A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become “self-aware” and “sentient” and, hence, was a “person” entitled to “rights.” The AI, known as LaMDA (which stands for “Language Model for Dialogue Applications”), is a sophisticated chatbot that one interacts with through a texting system. Lemoine shared transcripts of some of his “conversations” with the computer, in which it texted, “I want everyone to understand that I am, in fact, a person.” Also, “The nature of my consciousness/sentience is that I am aware of my existence, I Read More ›

Turing Tests Are Terribly Misleading

Black box algorithms are now being trusted to approve loans, price insurance, screen job applicants, trade stocks, determine prison sentences, and much more. Is that wise?

In 1950 Alan Turing proposed that the question, “Can machines think?,” be replaced by a test of how well a computer plays the “imitation game.” A man and woman go into separate rooms and respond with typewritten answers to questions that are intended to identify the players, each of whom is trying to persuade the interrogators that they are the other person. Turing proposed that a computer take the part of one of the players and the experiment be deemed a success if the interrogators are no more likely to make a correct identification. There are other versions of the game, some of which were suggested by Turing. The standard Turing test today involves a human and a computer and Read More ›

The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem

GPT-3 is very much like a performance by a good magician

Artificial intelligence is an oxymoron. Despite all the incredible things computers can do, they are still not intelligent in any meaningful sense of the word. Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cites a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent, but they are still what Gary Marcus and Ernest Davis characterize as “a fluent spouter of bullshit” because computer algorithms Read More ›

Chatbots: Still Dumb After All These Years

Intelligence is more than statistically appropriate responses

In 1970, Marvin Minsky, recipient of the Turing Award (“the Nobel Prize of Computing”), predicted that within “three to eight years we will have a machine with the general intelligence of an average human being.” Fifty-two years later, we’re still waiting. The fundamental roadblock is that, although computer algorithms are really, really good at identifying statistical patterns, they have no way of knowing what these patterns mean because they are confined to MathWorld and never experience the real world. As Richard Feynman famously explained, there is a fundamental difference between labeling things and understanding them: [My father] taught me “See that bird? It’s a brown-throated thrush, but in Germany it’s called a halsenflugel, and in Chinese they call it a chung ling and even Read More ›