Richard W. Stevens
Are Chatbots Biased? The Research Results Are In
The results are obvious and dramatic. Inject the preferred training materials and the chatbot will "believe" whatever the post-trainer intended.

People have noticed political biases in artificial intelligence (AI) chatbot systems like ChatGPT, so researcher David Rozado studied 24 large language model (LLM) chatbots to find out. Rozado's preprint paper, "The Political Preferences of LLMs," delivers open-access findings from very recent research and declares: When probed with questions/statements with political connotations, most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints.

The Chatbots' Landslide of Opinion

As reported in the New York Times, the paper restates that "most modern conversational LLMs when probed with questions with political connotations tend to generate answers that are … left-leaning viewpoints." Using the verb "tend to" makes the conclusion appear tepid. The … Read More ›
EU’s Massive New AI Law Won’t Stop Worst-Case Systems
The Act is drafted using legal language that confers broad additional power to governments.

Cyber Plagiarism: When AI Systems Snatch Your Copyrighted Images
Outright copying of others' images may put a system's owners in legal jeopardy. Let's look at U.S. legal decisions.

Human Impersonation AI Must Be Outlawed
I didn't use to think that AI systems could threaten civilization. Now I do.

Facebook and Instagram Allegedly Hook Youngsters with Dopamine-Triggering Tactics
You Can’t Always Be Happy
Our dopamine system both excites and tames pleasure.

Night Shift: The Brain's Extraordinary Work While Asleep
Lie down, close your eyes, lose consciousness, and the brain undertakes the heavy lifting that sleep demands.

Congress Boosts "Kill Switch" Technology to Control Drivers
Federal agency power poised to extend to your every move.

Lawsuit Champions Human Creativity Over AI Mimicry
Copyright laws can protect against sophisticated plagiarism.

Authors Guild Sues OpenAI For Unlawful Copying of Creative Works
Did ChatGPT make physical copies of copyrighted books and articles?

Inside the Mind of a Rock 'n' Roll Drummer
Delving into the thrilling, demanding world of professional drumming and the mind-body communication it requires.

How a Toddler in a Toy Store Refutes Materialism
This everyday observation yields insight into a fundamental truth.

I'm a magnet for materialists. I often get into discussions with people who tell me that the universe is nothing but matter and energy. These folks believe in materialism. They say I'm nutty and wrong to think there is anything else. Something like: "Silly theist! Gods are for kids!" Let's follow that thought. A grandparent of 11 humans, I've journeyed with their parents through the young ones' toddlerhood many times. There's a lot to learn about reality from toddlers' learning and growing. It leads to understanding Toddler Truth. Take a toddler to a game arcade, a toy store, or another kid's house to play. There's one thing you can count on hearing: "I want that!" We parents start tuning out … Read More ›
Postmodernism’s Steady Deconstruction of Reality
How can we find truth when nothing is reliable?

Sometimes, you just have to try using college professors' ideas in the real world. One such idea is "postmodernism." Applied to communications, postmodernism teaches that whenever we read a written text, we should not try to discover what the writer intended. Instead of looking for an objective "meaning," we should experience what the text means to us personally. The idea goes further, urging us to start by disbelieving the text and doubting our interpretations of it, too. People with the postmodern "deconstructionist" view say, "every text deconstructs" itself, and "every text has contradictions." Deconstruction means "uncovering the question behind the answers already provided in the text." Standing upon the ideas of the deconstructionist guru, Jacques Derrida, and his followers, one … Read More ›
Making Sense of the Warhol v. Goldsmith Supreme Court Case
Lawyer Richard W. Stevens sheds light on a recent groundbreaking court case that has implications for generative AI and copyright issues.

Here is an excerpt of the transcript from a recent Mind Matters podcast episode, which you can listen to in full here. Lawyer and Walter Bradley Center Fellow Richard W. Stevens sat down with Robert J. Marks to discuss a Supreme Court case regarding AI and copyright issues. Stevens helps us understand more of what the case is about and what's at stake. For more on this, read about the court case's conclusion here, as well as Marks's commentary from Newsmax. Richard Stevens: So to boil this down, the situation was this. A woman by the name of Lynn Goldsmith, a professional photographer, took a photo of the musician named Prince. Later, Andy Warhol was paid to produce an orange … Read More ›
Lawyer Hammered for Using ChatGPT
Court record system proceeded to block access to sloppy lawyering and AI catastrophe.

New York Times reporters watched the hearing in federal district court in New York on June 8, 2023, which they then described: In a cringe-inducing court hearing, a lawyer who relied on A.I. to craft a motion full of made-up case law said he "did not comprehend" that [ChatGPT] could lead him astray. Lawyer Who Used ChatGPT Faces Penalty for Made Up Citations – The New York Times (nytimes.com) The reporters got most of it right, but even they erred. The lawyer involved did not write a "motion"; he filed a sworn declaration opposing a motion to dismiss. The difference matters: declarations are under oath, so the lawyer swore to the truth of ChatGPT's lies. Looking at the actual court … Read More ›
Let’s Apply Existing Laws to Regulate AI
No revolutionary laws needed to fight harmful bots.

In a recent article, Professor Robert J. Marks reported how artificial intelligence (AI) systems had made false reports or given dangerous advice. Prof. Marks suggested that instead of having government grow even bigger trying to "regulate" AI systems such as ChatGPT: How about, instead, a simple law that makes companies that release AI responsible for what their AI does? Doing so will open the way for both criminal and civil lawsuits. Strict Liability for AI-Caused Harms Prof. Marks has a point. Making AI-producing companies responsible for their software's actions is feasible using two existing legal ideas. The best known such concept is strict liability. Under general American law, strict liability exists when a defendant is liable for committing an action … Read More ›
Panic Propaganda Pushes Surrender to AI-Enhanced Power
The hype over AI's significance makes us more vulnerable to it.

Can you believe it? USA Today, the national news outlet, on May 4, 2023, declared (italics added): It's the end of the world as we know it: 'Godfather of AI' warns nation of trouble ahead. Before digging out and playing your 1987 REM album, ask yourself: Is this headline true – and what do we do now? The USA Today article mitigates the doom timeframe from imminent to someday in paragraph one (italics added): One of the world's foremost architects of artificial intelligence warned Wednesday that unexpectedly rapid advances in AI – including its ability to learn simple reasoning – suggest it could someday take over the world and push humanity toward extinction. Within a day, the Arizona Republic ran … Read More ›
20 Ways AI Enables Criminals
If you cannot believe your eyes and ears, then how can you protect yourself and your family from crime?

As reported recently and relayed in this publication, a mom in Arizona described how criminals called her to say they were holding her daughter for ransom and used artificial intelligence (AI) to mimic perfectly her daughter's voice, down to the word choices and sobs. Only because the mom found her daughter safe in her home could she know the call was a scam. Meanwhile, despite efforts to limit ChatGPT's excursions into the dark side of human perversity, the wildly famous bot can be persuaded to discuss details of sordid sexuality. In one experiment with Snapchat's MyAI chatbot, an adult pretending to be a 13-year-old girl asked for advice about having sex for the first time – in a conversation in … Read More ›
Can Professor Turley Sue ChatGPT for Libel?
The world wide web of reputation destruction is here.

Isn't there a law against falsely accusing people of serious crimes or misconduct and then publishing damaging lies to the world? Yes. For centuries in English-speaking countries, the victim of such lies could sue the false accuser in civil court for libel per se. Nowadays, libel and its oral-statement cousin, slander, are grouped together as defamation. Under American law, it isn't easy to bring and win a lawsuit even when your case seems strong, but at least the law provides some recourse for defamation. How about when the false accuser is ChatGPT? Jonathan Turley, the nationally known George Washington University law professor and commentator, woke up one morning to discover: ChatGPT falsely reported on a claim of sexual harassment that was never made … Read More ›