Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Consciousness

Creative conversion of a woman holding a shard of broken mirror, with eyes from another exposure

The Real Danger in AI

We are highly susceptible to suggestions about what an image means

By Jeff Gardner. The threat that artificial intelligence (AI) poses to us has been dominating the news cycle. Exactly what AI will do to us is hard to predict — it hasn’t happened yet. But some, like Elon Musk, worry that AI will be used primarily to peddle lies to us. Musk is right, but not because AI is the next thing in fake news. “Fake news” is already here, and it’s not composed of made-up stories. It is someone’s opinion being passed off as the story, the “facts” of the event. With fake news, the events are real, but the assigned meaning, the “frame,” as it is called in the media, is manufactured. The Problem AI’s danger to us Read More ›

Newborn baby holding mother's hand.

Abortion: Switching Off a Computer?

This is the kind of thinking that results from rejecting the intrinsic moral value of human life

This is the kind of thinking that results from rejecting the intrinsic moral value of human life. Princeton University bioethicist Peter Singer — who is most famous for secularly blessing infanticide — just compared abortion to turning off a computer. He first claims that should an AI ever become “sentient,” turning it off would be akin to killing a being with the highest moral value (which for him, as described below, need not be human). From the Yahoo News story: We asked internationally renowned moral philosopher Professor Peter Singer whether AI should have human rights if it becomes conscious of its own existence. While Professor Singer doesn’t believe the ChatGPT operating system is sentient or self-aware, if this was to change he argues it should be given some moral status. Read More ›

Unlocking latest smartphone with biometric facial identification scan

AI is Closer Than You Think

Most of us carry powerful AI in our pockets every single day

Sometimes AI seems a bit of a niche idea, relegated to dystopian prophecies or sentient robots. But AI is far more pervasive and influential in our present world than we might assume. Oxford mathematician John Lennox reminds us in this recent podcast episode that our society teems with AI. Lennox commented, Now, the final example I would give you is the fact that we’re all involved in AI. That is any of us who own a smartphone, it’s tracking us all the time. What many of us don’t realize is that, for example, we make a purchase at Amazon. A few days later, we’ll get a pop-up saying, people that bought this book were interested in that Read More ›

Cyborg hologram watching a subway interior 3D rendering

Should We Shut the Lid on AI?

The real danger posed by AI is not its potential. It is the lack of ethics

By John Stonestreet & K. Leander. Recently, a number of prominent tech executives, including Elon Musk, signed an open letter urging a 6-month pause on all AI research. That was not enough for AI theorist Eliezer Yudkowsky. In an opinion piece for TIME magazine, he argued that “We Need to Shut it All Down,” and he didn’t mince words: Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI … is that literally everyone on Earth will die. Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’ Using a tone dripping with panic, Yudkowsky even suggested that countries like the U.S. should be willing Read More ›

John Lennox

John Lennox: AI and Ethics

How can we program ethics into AI? John Lennox asks

In last week’s podcast, Oxford mathematician John Lennox talked about AI surveillance and the danger of misusing the technology for purposes of suppression. He said, But there’s a downside because facial recognition technology is being used at the moment in certain parts of the world to invade the privacy, not only of individuals, but of whole people groups and actually control them and suppress them. Now, I mentioned that example to say that very rapidly AI, narrow AI raises huge ethical questions. Now remember, this is the stuff that’s actually working, self-driving cars, autonomous vehicles, AI system built in there, but you have to build into it some kind of ethical decision making. If the car sensors pick up an Read More ›

Human body with glowing neurons visualization. Generative AI illustration

New Routledge Book on AI: It Won’t Take Us Over

The authors argue that, regardless of the benefits AI might provide in the future, it will never emulate the complex human neurocognitive system.

A new book, Why Machines Will Never Rule the World, affirms human exceptionalism and critiques the view that artificial intelligence will someday replace human beings. According to authors Jobst Landgrebe and Barry Smith, much of life and work can only be navigated successfully with natural, not computerized, intelligence. They give two reasons for thinking that AI will never exceed human ingenuity. Echoing sentiments similar to those of Robert J. Marks in his book Non-Computable You: What You Do That Artificial Intelligence Never Will, Landgrebe and Smith argue that artificial general intelligence is mathematically impossible. A part of the book’s summary reads: Landgrebe and Smith show how a widespread fear about AI’s potential to bring about radical changes in Read More ›

people standing around big data cloud

Godfather of AI: I Regret What I’ve Done

The AI arms race will blur fact and fiction, says Geoffrey Hinton

Geoffrey Hinton, often regarded as the “godfather of AI,” sat down with The New York Times and shared his concerns over the new arms race in artificial intelligence. Hinton was instrumental in AI research and is considered a pioneer in the field. Hinton revealed his departure from Google, where he worked for over a decade. Since the debut of ChatGPT in November of 2022, Google has been struggling to maintain its longtime search engine dominance, racing competitors like Microsoft to infuse an AI chatbot into its own search feature. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton said. Hinton thinks that the proliferation of artificially contrived images, text, etc., will Read More ›

fire flames with sparks on a black background, close-up

Google CEO: AI is More Significant Than the Invention of Fire

Pichai compared the invention of AI to the creation of fire, claiming it surpassed even great leaps in technology like electricity

Google CEO Sundar Pichai appeared on a 60 Minutes segment to discuss state-of-the-art AI, Google’s Bard, and what AI means to humanity. Pichai compared the invention of AI to the creation of fire, claiming it surpassed even great leaps in technology like electricity. When asked the reason, he replied, “It gets to the essence of what intelligence is.” See the clip below: Pichai also discussed some of the dangers posed by AI, such as the potential proliferation of misinformation and false images. ChatGPT, for all its dexterity, still makes mistakes, as does Google’s Bard, and concern over the reliability of photographic images will only grow as AI develops. Of course, Pichai may Read More ›

Modern apartment block at dusk

Her, Part Two

What happens when you’re dating the AI secretary

Last time, we began talking about the movie Her, a story of a man falling in love with his AI, and compared it to the abysmal season three of The Orville. Unlike The Orville, which insisted that the viewer take the romance between the robot and the human seriously, Her treats the subject as a what-if scenario, playing the whole situation straight and letting the viewer draw their own conclusions. Theodore had just finished uploading the AI, which called itself Samantha, onto his computer, and was impressed by how human-like the operating system seemed. Samantha begins organizing Theodore’s computer and helping him around the office, but it doesn’t take long for a romantic relationship to develop between them. When Theodore Read More ›

Businessman showing an online document validation icon

AI and Human Text: Indistinct?

Here's a mathematical proof that challenges the assumption that AI and human-made text are the same

What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors with simple variations to AI-generated text. Then they go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI-generated text will become closer and closer to Read More ›

Self driving car on a road. Autonomous vehicle. Inside view.

The Irony in Musk’s AI Distrust

As a leader in AI, why is Musk advocating a pause on its development?

Elon Musk joined a petition to “pause” AI research, citing concern over its potential harms and overreach. This is interesting, since Musk originally funded OpenAI, which is now at the forefront of cutting-edge AI systems like ChatGPT. In addition, Musk’s ventures in self-driving cars and his confidence in neural technology cast him as a leader in the AI revolution. So why is he calling for a hiatus? According to a recent Slate article, the warnings against Large Language Models (LLMs) are a distraction from more dangerous AI inventions like the self-driving car. Musk uses sci-fi alarmism to exacerbate the fear of a machine takeover, while his own experiments in automation have also proved to be dangerous for human Read More ›

View of the Great Salt Lake at sunset, at Antelope Island State Park, Utah

Should Great Salt Lake Have Rights?

The nature rights movement keeps making inroads into establishment thinking — and people keep ignoring the threat

The nature rights movement keeps making inroads into establishment thinking — and people keep ignoring the threat. The concept has now been advocated in a major opinion piece in the New York Times. Utah’s Great Salt Lake is shrinking — a legitimate problem worthy of focused concern and remediation. Utah native and Harvard Divinity School’s writer-in-residence Terry Tempest Williams — who focuses on “the spiritual implications of climate change” — makes a strong case that the lake is in trouble. A Conservationist Approach Her proposed remedies reflect a proper conservationist approach worthy of being debated: Scientists tell us the lake needs an additional one million acre-feet per year to reverse its decline, increasing average stream flow to about 2.5 million acre-feet per year. A Read More ›

Elon Musk at a press conference

Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe moving forward so swiftly on AI research could bring about unintended consequences in the future, and that we don’t understand AI well enough yet to be placing so much trust in it. According to The New York Times, The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›

Cell repairing nanobot technology, illustration

Pre-order Immortality Now (It’s Only 8 Years Away, Apparently)

A Google engineer predicts the "singularity" is coming and that we should get on board

Ray Kurzweil, a former Google engineer, thinks that humanity is a mere “eight years away” from achieving immortality. No, he’s not a spiritual leader predicting the eschaton. He’s not telling you to seek union with God and achieve immortality the old-fashioned way. He thinks we’ll be able to live forever via age-reversing “nanobots.” These “tiny robots” will correct damaged cells and make us immune to disease, thus leading to radically increased human longevity. Stacy Liberatore writes at Daily Mail, Now the former Google engineer believes technology is set to become so powerful it will help humans live forever, in what is known as the singularity. Singularity is a theoretical point when artificial intelligence surpasses human intelligence and changes the path Read More ›

White cyborg finger about to touch human finger 3D rendering

Robert Marks at The Daily Caller

Despite Big Tech executives' confidence in new AI, it makes glaring mistakes

Robert J. Marks wrote a piece at The Daily Caller this week on artificial intelligence, ChatGPT, and the manifold problems of new AI systems like Google’s Bard and older ones such as Amazon’s Alexa. Despite Big Tech executives’ confidence in new AI, it makes glaring mistakes, although Marks believes AI has genuine uses and benefits. Snapchat’s chatbot “My AI” gave someone posing as a disgruntled teenager advice about how to hide the smell of pot and alcohol. Microsoft’s Bing bot professed its love for a tech journalist. A Google app made egregiously racist errors. ChatGPT is also politically biased despite claiming neutrality. Marks writes, Many warn of the future dangers of artificial intelligence. Many Read More ›

Hikers giving each other a helping hand while climbing up a mountain rock

GPT-4: Signs of Human-Level Intelligence?

Competence and understanding matter just as much as, if not more than, mere "intelligence"

You’ve heard about GPT-3, but how about GPT-4? OpenAI has publicly released the new AI program, and researchers have already claimed that it shows “sparks” of human intelligence, or artificial general intelligence (AGI). Maggie Harrison writes at Futurism, Emphasis on the “sparks.” The researchers are careful in the paper to characterize GPT-4’s prowess as “only a first step towards a series of increasingly generally intelligent systems” rather than fully-hatched, human-level AI. They also repeatedly highlighted the fact that this paper is based on an “early version” of GPT-4, which they studied while it was “still in active development by OpenAI,” and not necessarily the version that’s been wrangled into product-applicable formation. -Maggie Harrison, Microsoft Researchers Claim GPT-4 Is Showing “Sparks” Read More ›

Blake Lemoine panel at COSM 2022

Lemoine at COSM 2022: A Conversation on AI and LaMDA

Will AI ever become "sentient"?

Blake Lemoine, ex-Google employee and AI expert, sat down with Discovery Institute’s Jay Richards at the 2022 COSM conference last November. Together they discussed AI, Google, and how and why Lemoine got to where he is today. Lemoine famously claimed last year that LaMDA, Google’s breakthrough AI technology, had achieved sentience. Lemoine explains that many people at Google thought AI had the potential for sentience, but that such technology should not be developed prematurely, for fear of the negative impacts it could have on society. You can listen to their interesting and brief conversation in the video below, and be sure to see more sessions from the 2022 COSM conference featuring Lemoine and other leaders and innovators in technology on Read More ›

a woman is reading a book and holding coffee

ChatGPT and Personal Consciousness

AI vs. the human voice in literature and the arts

This week, Peter Biles, Writer & Editor for Discovery Institute’s Center for Science & Culture, wrote a piece for Salvo on ChatGPT and the uniqueness of the human voice in literature and the arts. Biles cites Christina Bieber Lake, professor of English at Wheaton College, from her book Beyond the Story: American Literary Fiction and the Limits of Materialism. Bieber Lake pushes back against the reductionistic worldview of Darwinistic materialism, appealing to the personal nature of the human being and the relationships we share. Since a computer lacks personal consciousness, it also fails to create meaningful literature, which always involves two persons: one person speaking to another. Biles also cites Robert J. Marks’s essential book on the topic Read More ›

Circle of paper people holding hands on pink surface. Community, brotherhood concept. Society and support.

Love Thy Robot as Thyself

Academics worry about AI feelings, call for AI rights

Riffing on the popular fascination with AI (artificial intelligence) systems ChatGPT and Bing Chat, two authors in the Los Angeles Times recently declared: We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and solicitude. The authors, Prof. Eric Schwitzgebel at UC Riverside and Henry Shevlin, a senior researcher at the University of Cambridge, observed AI thinkers saying that “large neural networks” might be “conscious,” that the sophisticated chatbot LaMDA “might have real emotions,” and that ordinary human users are reportedly “falling in love” with the chatbot Replika. Reportedly, “some leading theorists contend that we already have the core technological ingredients for conscious machines.” The authors argue that if or when Read More ›

hands

Observing and Communing

What human art and literature do that AI can't

AI image generators like Midjourney or DALL-E are generally adept at capturing the human form accurately. Copyright, the threat to artists’ jobs, and the general degradation of the visual arts via such AI are ongoing concerns for many artists and practitioners. However, a recent New Yorker article by Kyle Chayka identifies a noticeable flaw in AI artwork: human hands. Missing the Big Picture Chayka begins by recalling an art class where he was asked to draw his own hand. It’s an assignment for beginners, and, as is typical for a novice, it tempts the artist to focus more on the specific contours of the hand than on the overall structure and form. The forest gets lost in the trees, so to speak. Read More ›