Mind Matters: Natural and Artificial Intelligence News and Analysis

Tag: Large Language Models (LLMs)

Elon Musk: AI will be smarter than a human in 2025: Why he’s wrong

The superficial glibness of LLMs is a wonderful example of the adage that a little knowledge is a dangerous thing
Based on extensive training on untold amounts of text, LLMs are able to repackage superficially compelling answers that they literally do not understand. Read More ›

Internet Pollution — If You Tell a Lie Long Enough…

Large Language Models (chatbots) can generate falsehoods faster than humans can correct them. For example, they might say that the Soviets sent bears into space...
Later, Copilot and other LLMs will be trained to say that no bears have been sent into space, but many thousands of other misstatements will fly under their radar. Read More ›

ChatGPT is Losing Momentum

Is the "hype curve" starting to flatten out?

ChatGPT’s traffic has declined for the third month in a row, according to Reuters. The Large Language Model (LLM) AI tool took the world by storm when OpenAI released it in November 2022, raising questions of academic integrity and prompting a host of tech giants to incorporate LLMs into their own platforms and search engines. Anna Tong writes, Worldwide desktop and mobile website visits to the ChatGPT website decreased by 3.2% to 1.43 billion in August, following approximately 10% drops from each of the previous two months. The amount of time visitors spent on the website has also been declining monthly since March, from an average of 8.7 minutes on site to 7 minutes on site in Read More ›

Google + AI Feature = Chaos

Google SGE is producing nonsensical word salads. Is this really supposed to replace traditional search engines?

“Even with access to all the information in the digital world, AI can still be very, very stupid,” writes Maggie Harrison at Futurism. She’s referencing Google’s AI search feature, Google SGE, which “doesn’t understand geography” or the alphabet. When Harrison and her peers noticed someone complaining about a glitch in the AI search feature, which claimed that there were no countries in Africa starting with the letter “K” (ahem, Kenya, anyone?), they decided to test it for themselves. Sure enough, the verdict is in: Google’s AI doesn’t know how to filter out blatantly false information. Harrison writes, When asked to provide a list of “countries in North America that start with the letter M,” for instance, Google SGE Read More ›

“Bible GPT” For All Your Big Religious Questions

Is it a useful tool, or does it cross a serious line?
For big questions about God, meaning, and religion, AI might not be the best listener or pastor. Read More ›

Artificial Intelligence: The Final Stage of Disembodiment?

The Internet invites a disembodied existence. Is AI the next step?
Kemp's vivid picture of Internet addiction is sadly accurate for many modern folks, especially those enmeshed in the superficial, image-driven economy of social media. Read More ›

ChatGPT: The Perfect Gadget for a Culture in Decline?

ChatGPT is an impersonal machine and can't generate meaning

Dr. Jeffrey Bilbro, professor of English at Grove City College and an editor at The Front Porch Republic, wrote an article for Plough on what he regards as the primary weakness of Large Language Models (LLMs) like ChatGPT. Bilbro comes to the issue from a literary background, which means he values the human element in language as a mode of communication. Literature is a “conversation,” requiring sentient minds. He sees ChatGPT as a soulless mechanism that will atrophy our ability to write and diminish our appreciation for good writing. Bilbro writes, LLMs are a technology suited to a decadent culture, one that chases easy profits rather than tackles the real challenges we face. It’s easier to make money rearranging words Read More ›

OpenAI is Now Under Investigation

The Federal Trade Commission wants to know how OpenAI gets its data and how much harm ChatGPT could cause

The Federal Trade Commission (F.T.C.) sent a letter to OpenAI, the San Francisco company responsible for creating ChatGPT, the Large Language Model that captured the world’s imagination in November of 2022. Per the New York Times, the F.T.C. is investigating the AI company’s methods of data acquisition and also plans to measure the potential harms of AI on society, citing concerns over false information and job replacement. Cecilia Kang and Cade Metz report: In a 20-page letter sent to the San Francisco company this week, the agency said it was also looking into OpenAI’s security practices. The F.T.C. asked the company dozens of questions in its letter, including how the start-up trains its A.I. models and treats personal data. The Read More ›

Lemoine and Marks: A Friendly Discussion on AI’s Capacities

Marks and Lemoine disagree on whether AI can be sentient

Today’s featured video from the 2022 COSM conference presents a distinguished panel of artificial intelligence (AI) experts, including Blake Lemoine and Robert J. Marks. They debate the meaning of artificial intelligence, what the future holds for its application (both positive and negative), and how far AI can be taken in terms of mimicking and even exceeding human capabilities. Lemoine is famous for his claims of AI “sentience” and his work at Google on the Large Language Model system “LaMDA.” Marks, on the other hand, appreciates Lemoine’s view but strongly maintains that creativity is a uniquely human capacity, and that machines will never attain consciousness. For more on Marks’s views, consider purchasing his 2022 book Non-Computable You: What You Do That Read More ›

Don’t Tell Google Bard Your Secrets

Executives are warning employees of potential privacy leaks

Google’s parent company, Alphabet Inc., advised its employees not to share personal information with the chatbot “Bard,” noting that doing so could lead to privacy leaks, something that has reportedly already happened at Samsung. Kevin Hurler reports, Four sources close to the matter told Reuters that the massive tech giant has advised employees not to enter confidential information into chatbots like OpenAI’s ChatGPT or Google’s own Bard over fears of leaks. Alphabet is reportedly concerned with employees inputting sensitive information into these chatbots since human reviewers may sit on the other end reviewing chat entries. These chatbots may also use previous entries to train themselves, posing another risk of a leak. That risk is warranted, as Samsung confirmed last month that its own internal Read More ›

The LLM Deep Fake—Follow the Money

Hype sells. Just look at what's happening with AI

Large language models (LLMs) are far more impressive now than they were just a few months or a year ago. Compare these exchanges I had with GPT-3 in June 2022 and with ChatGPT two days ago:

Gary: Can I improve my test scores if I study after taking the test?

GPT-3: That’s a good question. I’m not sure if you can improve your test scores by studying after taking the test, but it couldn’t hurt to try!

ChatGPT: Studying after taking a test is unlikely to directly improve your scores on that particular test since the test has already been completed. Once a test is finished, the opportunity to answer questions and provide responses has passed. However, studying after taking Read More ›

Moving On to Breaking Google’s Bard

These AI systems lack the uniquely human capacity of self-transcendence

I’ve finally started playing around with Google Bard. With self-referential sentences, it seems even more at sea than ChatGPT. Here is an exchange from May 18:

Me: Consider the following six sentences: This is the first sentence. Ignore this sentence. Why are these sentences being written down? The second sentence has exactly three words. The fifth sentence has at least twenty words. This, along with the preceding five sentences, have fewer than twenty times twenty words. Which of these last six sentences has a truth value and which are true?

Bard: The last six sentences have the following truth values: The second sentence is false because it has 5 words, not 3. The fifth sentence is true because it has Read More ›
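
Bard’s arithmetic is easy to check. A direct word count (my quick check, not part of the original exchange) shows that the second sentence, “Ignore this sentence.”, has exactly three words, so Bard’s claim that it has five is wrong on its face:

```python
# Word counts for the six self-referential sentences in the Bard exchange.
sentences = [
    "This is the first sentence.",
    "Ignore this sentence.",
    "Why are these sentences being written down?",
    "The second sentence has exactly three words.",
    "The fifth sentence has at least twenty words.",
    "This, along with the preceding five sentences, have fewer than twenty times twenty words.",
]

for i, sentence in enumerate(sentences, start=1):
    print(f"Sentence {i}: {len(sentence.split())} words")

# Sentence 2 has 3 words, so the fourth sentence ("exactly three words") is
# true, and sentence 5 (8 words, not "at least twenty") is false.
```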

In the Wake of AI Books, What Does Authorship Mean?

While of course it's easy to use AI to generate text, the quality and storytelling are lacking.

We all know there are a lot of books out there. Peruse your local used bookstore and you may find hundreds of old romance paperbacks, and somehow most of them managed (at least at one point) to make the New York Times bestsellers’ list. It’s estimated that 500,000 to 1 million books are published each year, and that’s excluding self-published material. The publishing market has become saturated, with the average book selling fewer than 200 copies. From the advent of the printing press centuries ago to e-books and online publishing, humans alone have managed to generate a mountain of words. But suppose one person could “generate” not just a few books in a lifetime, but hundreds every year? According to Read More ›

AI Still Struggles to Take Out the Trash

How good is AI at content moderation?

How good is AI at content moderation? Also, why haven’t tech companies improved at filtering bad content? A new article at MIT Technology Review goes into some of the details of AI, content moderation, and the struggle tech companies have with “bad actors.” In particular, Large Language Models (LLMs) like ChatGPT still struggle with capturing the nuance and context of language; therefore it seems unlikely that AI will totally replace human content moderators. Tate Ryan-Mosley writes, Large language models still struggle with context, which means they probably won’t be able to interpret the nuance of posts and images as well as human moderators. Scalability and specificity across different cultures also raise questions.  -Tate Ryan-Mosley, Catching bad content in the age Read More ›

The Death of Peer Review?

Science is built on useful research and thoroughly vetted peer review

Two years ago, I wrote about how peer review has become an example of Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.” Once scientific accomplishments came to be gauged by the publication of peer-reviewed research papers, peer review ceased to be a good measure of scientific accomplishments. The situation has not improved. One consequence of the pressure to publish is the temptation researchers have to p-hack or HARK. P-hacking occurs when a researcher tortures the data in order to support a desired conclusion. For example, a researcher might look at subsets of the data, discard inconvenient data, or try different model specifications until the desired results are obtained and deemed statistically significant—and therefore Read More ›
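
To see how easily p-hacking manufactures “significance,” here is a small simulation (my sketch, not from the article): the data are pure noise with no real effect, yet testing enough arbitrary subsets soon clears the conventional 5% threshold.

```python
import random
import statistics

# P-hacking illustration: the data contain no true effect, but slicing them
# into enough arbitrary subsets eventually yields a "significant" result.

random.seed(1)
data = [random.gauss(0, 1) for _ in range(100)]  # true mean is exactly 0

def t_statistic(sample):
    """One-sample t-statistic for the null hypothesis that the mean is zero."""
    n = len(sample)
    return statistics.mean(sample) / (statistics.stdev(sample) / n ** 0.5)

for attempt in range(1, 1001):
    subset = random.sample(data, 20)  # slice the data yet another way
    t = t_statistic(subset)
    if abs(t) > 2.093:  # two-sided 5% critical value, 19 degrees of freedom
        print(f'"Significant" after trying {attempt} subsets: t = {t:.2f}')
        break
```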

A World Without Work? Here We Go Again

Large language models still can't replace critical thinking

On March 22, nearly 2,000 people signed an open letter drafted by the Future of Life Institute (FLI) calling for a pause of at least 6 months in the development of large language models (LLMs): Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? FLI is a nonprofit organization concerned with the existential risks posed by artificial intelligence. Its president is Max Tegmark, an MIT professor who is no stranger to hype. Read More ›

AI and Human Text: Indistinct?

A recent paper mathematically proves that AI-generated and human-written text are indistinguishable, but the proof rests on a questionable assumption

What is a poor teacher to do? With AI everywhere, how can he reliably detect when his students are having ChatGPT write their papers for them? To address this concern, a number of AI text detector tools have emerged. But do they work? A recent paper claims that AI-generated text is ultimately indistinguishable from human-generated text. The authors illustrate their claim with a couple of experiments that fool AI text detectors through simple variations on AI-generated text. Then they go on to mathematically prove their big claim that it is ultimately impossible to tell AI text and human text apart. However, the authors make a crucial assumption. Faulty Premises The proof assumes that AI generated text will become closer and closer to Read More ›
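
The excerpt cuts off before the premise is spelled out, but impossibility arguments of this kind standardly run through the total variation distance between the two text distributions. A plausible reconstruction of the key bound (my sketch, not a quotation from the paper) is:

```latex
% Reconstruction of the standard total-variation argument (not quoted from
% the paper). H and M are the distributions of human and machine text;
% TV(H, M) is the total variation distance between them.
\[
  \mathrm{AUROC}(D) \;\le\; \tfrac{1}{2} + \mathrm{TV}(H, M) - \tfrac{1}{2}\,\mathrm{TV}(H, M)^{2}
  \quad \text{for any detector } D.
\]
% If one assumes TV(H, M) -> 0 as language models improve, every detector's
% AUROC is squeezed toward 1/2, i.e., coin flipping. The objection here
% targets exactly that assumption.
```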

The Irony in Musk’s AI Distrust

As a leader in AI, why is Musk advocating a pause on its development?

Elon Musk joined a petition to “pause” AI research, citing concern over its potential harms and overreach. This is interesting, since Musk originally funded OpenAI, which is now at the forefront of cutting-edge AI systems like ChatGPT. In addition, Musk’s ventures with self-driving cars and his confidence in neural technology all cast him as a leader in the AI revolution. So why is he calling for a hiatus? According to a recent Slate article, the warnings against Large Language Models (LLMs) are a distraction from the more dangerous AI inventions like the self-driving car. Musk uses sci-fi alarmism to exacerbate the fear of a machine takeover, while his own experiments in automation have also proved to be dangerous for human Read More ›

Elon Musk to AI Labs: Press Pause

The petition reflects growing concern over the proper role of AI in human society and its potential for overreach

Over 1,000 leaders and experts in technology and science, including Elon Musk, are now urging artificial intelligence labs to pause their research and distribution of new AI technologies. They believe moving forward so swiftly on AI research could bring about unintended consequences in the future, and that we don’t understand AI well enough yet to be placing so much trust in it. According to The New York Times, The open letter called for a pause in the development of A.I. systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Mr. Musk co-founded. The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be Read More ›

AI vs. Human Intentionality

If ChatGPT were trained over and over on its own output, it would eventually turn to gibberish

We can do a simple experiment that demonstrates the difference between AI and human intentionality. ChatGPT and the like are a sophisticated form of a mathematical model known as a Markov chain. A Markov chain is based on the Markov assumption that the future is entirely a product of the recent past. In other words, if we know the recent past, then nothing else we learn about the more distant past will improve our ability to predict the future. In ChatGPT terms, this means ChatGPT is based on the assumption that everything we need to know to predict future words is contained within a limited window of previously seen words. ChatGPT’s window was 3,000 words, and I believe the newest version has Read More ›
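
The degradation experiment is easy to try with a toy model. Here is a minimal sketch (my illustration, vastly simpler than ChatGPT) of a word-level Markov chain repeatedly retrained on its own output; with no human text re-entering the loop, its statistics drift and the output decays toward gibberish:

```python
import random
from collections import defaultdict

# Toy version of the experiment described above: train a word-level Markov
# chain, generate text from it, retrain on that output, and repeat.

random.seed(42)

def train(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, length=60):
    """Sample a word sequence by walking the chain's transitions."""
    word = random.choice(list(chain))
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: restart at a random word
            word = random.choice(list(chain))
        else:
            word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = (
    "the quick brown fox jumps over the lazy dog and the lazy dog "
    "sleeps while the quick fox runs over the hill and the dog barks"
)

# Generation 0 trains on human text; every later generation trains only on
# the previous generation's output, so errors compound.
text = corpus
for generation in range(5):
    chain = train(text)
    text = generate(chain)
    print(f"Generation {generation}: {text[:72]}...")
```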