Mind Matters Natural and Artificial Intelligence News and Analysis

Tag: Emily M. Bender


Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses

This story was #5 at Mind Matters News in 2022 in terms of reader numbers. As we approach the New Year, we are rerunning the top ten Mind Matters News stories of 2022, based on reader interest. In “Google dismisses engineer’s claim that AI really talked to him” (June 14, 2022), our News division looks at what happened when software engineer Blake Lemoine, now ex-Google, became convinced that the large language program he tended was a person. Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program that motors through trillions of words on the internet to produce coherent answers using logic. Along the way, he convinced himself that the program is Read More ›


Why We Should Not Trust Chatbots As Sources of Information

A linguist and an information theorist say that chatbots lack any awareness of the information they provide — and that matters

Linguist Emily M. Bender and information theorist Chirag Shah, both of the University of Washington, have a message for those who think that the chatbot they are talking to is morphing into a real person: No. Not only that, but there are good reasons to be very cautious about trusting chatbots as sources of information, all the more so because they sound so natural and friendly. First, the authors point out, decades of science fiction have taught us to expect computer scientists to develop a machine like that: However, we must not mistake a convenient plot device — a means to ensure that characters always have the information the writer needs them to have — for a roadmap to how Read More ›


Google Dismisses Engineer’s Claim That AI Really Talked to Him

The reason LaMDA sounds so much like a person is that millions of persons’ conversations were used to construct the program’s responses.

Google engineer Blake Lemoine was working with LaMDA (Language Model for Dialogue Applications), a large language program that motors through trillions of words on the internet to produce coherent answers using logic. Along the way, he convinced himself that the program is sentient: Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test whether the artificial intelligence used discriminatory or hate speech. As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Read More ›


Google’s Leading AI Ethics Researcher Fired, Amid Controversy

Her research team targeted Google’s “cash cow”: advertising

Timnit Gebru, a leading AI ethics researcher, was fired from Google early this month under circumstances that have raised suspicions across the industry: On December 2, the AI research community was shocked to learn that Timnit Gebru had been fired from her post at Google. Gebru, one of the leading voices in responsible AI research, is known, among other things, for coauthoring groundbreaking work that revealed the discriminatory nature of facial recognition, cofounding the Black in AI affinity group, and relentlessly advocating for diversity in the tech industry. But on Wednesday evening, she announced on Twitter that she had been terminated from her position as Google’s ethical AI co-lead. “Apparently my manager’s manager sent an email [to] my direct reports Read More ›