
Why AI Can’t Really Filter Out “Hate News”

As Robert J. Marks explains, the No Free Lunch theorem establishes that computer programs without bias are like ice cubes without cold

In Define information before you talk about it, neurosurgeon Michael Egnor interviewed engineering prof Robert J. Marks on the way information, not matter, shapes our world (October 28, 2021). In the first portion, Egnor and Marks discussed questions like: Why do two identical snowflakes seem more meaningful than one snowflake? Then they turned to the relationship between information and creativity. Is creativity a function of more information, or is there more to it? Does Mount Rushmore have more information than Mount Fuji? Does human intervention make a measurable difference? If so, that measurable difference is specified complexity. Putting the idea of specified complexity to work, how do we measure meaningful information? How do we know that Lincoln contained more information than his bust? In this episode, they address the hope that advanced AI could somehow recognize and filter out bias and hate. The problem is that bias is innate in programming.

This portion begins at 43:13. A partial transcript with notes, Show Notes, and Additional Resources follow.

Michael Egnor: Some people hope that artificial intelligence could filter out hate news. No, it’s not going to be able to filter out hate news without a bias from the programmer as to what counts as hate news.


Robert J. Marks: And it is, I think, a firmly established fact through computer science theory — like the No Free Lunch Theorems — that you cannot build a computer program without an intention, without a bias.

Computer programs without bias are like ice cubes without cold. You just can’t have them. So we would expect intentionality in computer programs and artificial intelligence always to be programmed in by the computer programmers.
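
To make the point concrete, here is a minimal sketch in Python of the simplest possible “hate news” filter. The word list is entirely made up for illustration; it is not drawn from any real system. Every verdict the filter renders was decided the moment the programmer chose the list:

```python
# Toy "hate news" filter (hypothetical word list, for illustration only).
# The program has no judgment of its own: every verdict it returns was
# fixed in advance by the programmer's choice of BLOCKLIST.
BLOCKLIST = {"vermin", "subhuman", "invader"}   # the programmer's bias, encoded

def is_hate_news(text: str) -> bool:
    """Flag text containing any blocklisted word (exact matches only)."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return bool(words & BLOCKLIST)

print(is_hate_news("Officials call migrants invaders"))  # False: "invaders" isn't listed
print(is_hate_news("They are vermin, he said"))          # True: exact match on the list
```

A statistical classifier hides the same choice one layer deeper: the bias moves from an explicit word list into the labels a human assigned to the training data.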

Note: The No Free Lunch Theorems state that any one algorithm that searches for an optimal cost or fitness solution is not universally superior to any other algorithm. – Leon Fedden

Fedden tells us that the phrase “No Free Lunch” originated in the 19th century when workers could get free food at pubs provided that they bought a drink — meaning that the cost of the food was figured into the cost of the drink.
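
A toy version of the theorem can even be checked by brute force. The sketch below is an illustrative example, not the formal result: it assumes a four-point domain, three possible fitness values, and two arbitrary fixed search orders, then enumerates every possible fitness function and shows that neither search order finds the maximum any faster on average:

```python
# Brute-force illustration of the No Free Lunch idea: averaged over ALL
# possible fitness functions on a small domain, two different fixed search
# orders need the same mean number of probes to find the maximum.
from itertools import product

DOMAIN_SIZE = 4                  # search points
VALUES = range(3)                # possible fitness values

def probes_to_max(f, order):
    """Count evaluations until the global maximum value is first seen."""
    best = max(f)
    for k, x in enumerate(order, start=1):
        if f[x] == best:
            return k

order_a = [0, 1, 2, 3]           # one deterministic search strategy
order_b = [3, 1, 0, 2]           # a different deterministic strategy

all_functions = list(product(VALUES, repeat=DOMAIN_SIZE))  # every possible f
avg_a = sum(probes_to_max(f, order_a) for f in all_functions) / len(all_functions)
avg_b = sum(probes_to_max(f, order_b) for f in all_functions) / len(all_functions)

print(avg_a, avg_b)              # identical: no strategy wins on average
```

Any edge an algorithm gains on one class of problems is paid for on another. The programmer’s decision about which problems matter is the bias that buys the lunch.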

The incomprehensibility and unexplainability of huge algorithms

Michael Egnor: What terrifies me about artificial intelligence — and I don’t think one can overstate this danger — is that artificial intelligence has two properties that make it particularly deadly in human civilization. One is concealment. Even though every single purpose in artificial intelligence is human, it’s concealed. We don’t really understand it. We don’t understand Google’s algorithms.

There may even be a situation where Google doesn’t understand Google’s algorithms. But all of it comes from the people who run Google. So the concealment is very dangerous. We don’t know what these programs are doing to our culture. And it may be that no one knows, but they are doing things.

Note: Roman Yampolskiy has written about the incomprehensibility and unexplainability of AI: “Human beings are finite in our abilities. For example, our short term memory is about 7 units on average. In contrast, an AI can remember billions of items and AI capacity to do so is growing exponentially. While never infinite in a true mathematical sense, machine capabilities can be considered such in comparison with ours. This is true for memory, compute speed, and communication abilities.” So we have built-in bias and incomprehensibility at the same time.

The dangers of social contagion: the Twitter mob

Michael Egnor: And the second problem, which René Girard (1923–2015), a French philosopher, wrote about extensively, is the concept of mimetic contagion… We are imitative animals, and no other animal imitates anywhere near the way we do. And we particularly imitate other humans’ desires. Advertisers notice that if they show a popular quarterback drinking a certain brand of soda, other people will want to go out and buy that same soda. But that’s kind of an odd thing. Why would we imitate what that guy wants?

Robert J. Marks: Because the guy looks happy, and we want to be happy.

Michael Egnor: Right. And Girard developed this remarkable system of sociology and anthropology based on the idea that humans are inveterate imitators, and that they imitate desires. And he said that one of the most dangerous things that happens in human culture is mimetic contagion, a contagion of imitation…

I can imitate a guy in China at exactly the same moment that everybody else in the world imitates the same guy. And it takes zero seconds to do it. And that’s never happened before. Humanity has never had that kind of interconnectedness. And that mimetic contagion, according to Girard, is lethal to mankind.
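
Egnor’s zero-seconds point can be caricatured in a few lines of code. The toy simulation below is a deliberately crude model, not Girard’s theory: one person adopts a behavior, and at each step everyone who can “see” an adopter imitates. It compares a village of 1,000 people, where each person sees only two neighbors, with a network where everyone sees everyone:

```python
# Toy imitation cascade (illustrative only): one adopter seeds the network,
# and at each step anyone who can "see" an adopter imitates them.
def steps_to_saturate(neighbors, n, seed=0):
    adopted = {seed}
    steps = 0
    while len(adopted) < n:
        adopted |= {j for i in adopted for j in neighbors(i)}
        steps += 1
    return steps

N = 1000
ring = lambda i: [(i - 1) % N, (i + 1) % N]   # village: each sees two neighbors
internet = lambda i: range(N)                 # online: everyone sees everyone

print(steps_to_saturate(ring, N))      # 500 steps to saturate the village
print(steps_to_saturate(internet, N))  # 1 step: the whole network imitates at once
```

The structural change, not any change in human nature, is what collapses the contagion time from hundreds of steps to one.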

Note: Egnor unpacks this theme in more detail in “How the internet turns coffee klatsches into mobs”: “On the internet, you can personally attack someone without ever seeing them, knowing them, or being anywhere near them. You can attack people in a way that leads to violence against them without your own identity ever coming to light. The anonymity of the internet and the distance it creates between an attacker and his victim both lend an obscurity to the attack that is much more dangerous to the victim and much more desirable for the attacker. It is even possible to harm others unintentionally through the spread of errors and misunderstandings which are so common to internet communication.”

Michael Egnor: And it can happen like kerosene with a match. It can happen at incredible velocity and with incredible ferocity. And these are incredibly dangerous things that we’re dealing with. Frankly, I think that some of the political crisis in this country right now is because of that. It’s because of the bias inherent in our information and the enormous potential for imitation, for mimetic contagion.

Robert J. Marks: I just watched a Netflix documentary called The Social Dilemma, which talked about the impact of social media and Google and all of the data mining done by these big networks. That corresponds to the concealed information you talked about. And you’re right, it’s chilling. One of the things they mentioned is that there are only two industries that refer to their customers as users: social media and drug dealing.

The impact made me want to quit social media altogether. But I tell you, it’s addicting. So one has to do a partial withdrawal. Maybe I need to go into a 12-step program or something.

Michael Egnor: And the problem is that they know it’s addicting. And I think probably one of the reasons that it’s addicting is that they’ve made it addictive.

And we don’t even understand it. And frankly, they may not even fully understand it. That is, it’s incredibly dangerous stuff. It also has potential for good. But wow, the danger that we’re facing is, I don’t think we comprehend what this means…

Machines that can think for us?

Michael Egnor: What concerns me a great deal is, first of all, the widespread belief among people who engineer AI, that AI has the potential to become conscious or to have its own intentions.

I mean, nobody in their right mind actually thinks that a machine can think. The belief that a machine can think is along the lines of thinking that your television set is trying to communicate with you. The people who made the television program are communicating with you through the television set. But the television set isn’t trying to do anything. It’s just a piece of metal.

And these AI engineers are smart enough to know that. But they don’t seem to. And two things scare me. Number one, that the people who are designing AI aren’t smart enough to figure that out. And number two, that maybe they have figured that out, and they’re using it in ways that they’re not being honest about. And both of those concepts are terrifying.


Robert J. Marks: Yes, I do think that some of these testimonies before Congress about control of the masses are going to be revealed by history to be similar to the testimony of tobacco executives about the effects of cigarettes. They know what they’re doing, and it’s going to come out somewhere.

Michael Egnor: Right. And I think the primary motives have been to monetize it. Obviously, they want to make money. And frankly, I think that will always be the motive. I think they’re just trying to be trillionaires instead of just billionaires. But the thing is that there are certain cultural and social structures that can be built that make it more lucrative. And that’s very concerning.

Note: Billionaire investor Peter Thiel is not buying into this widespread belief. He told COSM 2021 last week that Artificial General Intelligence isn’t happening. What is happening is a huge increase in surveillance: control by AI companies who continually monitor us, not by “thinking machines.”

Next: Can information result, without intention, from wholly random processes? They tried it with computers…

Here are all the episodes in the series. Browse and enjoy:

  1. How information becomes everything, including life. Without the information that holds us together, we would just be dust floating around the room. As computer engineer Robert J. Marks explains, our DNA is fundamentally digital, not analog, in how it keeps us being what we are.
  2. Does creativity just mean Bigger Data? Or something else? Michael Egnor and Robert J. Marks look at claims that artificial intelligence can somehow be taught to be creative. The difficulty of getting AI to understand causation, as opposed to correlation, has led to many spurious correlations in data-driven papers.
  3. Does Mt Rushmore contain no more information than Mt Fuji? That is, does intelligent intervention increase information? Is that intervention detectable by scientific methods? With two DVDs of the same storage capacity, one containing random noise and the other a film (Braveheart, for example), how do we detect a difference?
  4. How do we know Lincoln contained more information than his bust? Life forms strive to be more of what they are. Grains of sand don’t. You need more information to strive than to just exist. Even bacteria, not intelligent in the sense we usually think of, strive. Grains of sand, the same size as bacteria, don’t. Life entails much more information.
  5. Why AI can’t really filter out “hate news.” As Robert J. Marks explains, the No Free Lunch theorem establishes that computer programs without bias are like ice cubes without cold. Marks and Egnor review worrying developments from large data harvesting algorithms — unexplainable, unknowable, and unaccountable — with underestimated risks.
  6. Can wholly random processes produce information? Can information result, without intention, from a series of accidents? Some have tried it with computers… Dr. Marks: We could measure in bits the amount of information that the programmer put into a computer program to get a (random) search process to succeed.
  7. How even random numbers show evidence of design Random number generators are actually pseudo-random number generators because they depend on designed algorithms. The only true randomness, Robert J. Marks explains, is quantum collapse. Claims for randomness in, say, evolution don’t withstand information theory scrutiny.

Show Notes

  • 00:00:09 | Introducing Dr. Robert J. Marks
  • 00:01:02 | What is information?
  • 00:06:42 | Exact representations of data
  • 00:08:22 | A system with minimal information
  • 00:09:31 | Information in nature
  • 00:10:46 | Comparing biological information and information in non-living things
  • 00:11:32 | Creation of information
  • 00:12:53 | Will artificial intelligence ever be creative?
  • 00:17:40 | Correlation vs. causation
  • 00:24:22 | Mount Rushmore vs. Mount Fuji
  • 00:26:32 | Specified complexity
  • 00:29:49 | How does a statue of Abraham Lincoln differ from Abraham Lincoln himself?
  • 00:37:21 | Achieving goals
  • 00:38:26 | Robots improving themselves
  • 00:43:13 | Bias and concealment in artificial intelligence
  • 00:44:42 | Mimetic contagion
  • 00:50:14 | Dangers of artificial intelligence
  • 00:54:01 | The role of information in AI evolutionary computing
  • 01:00:15 | The Dead Man Syndrome
  • 01:02:46 | Randomness requires information and intelligence
  • 01:08:58 | Scientific critics of Intelligent Design
  • 01:09:40 | The controversy between Darwinian theory and ID theory
  • 01:15:07 | The Anthropic Principle

Additional Resources

Podcast Transcript Download

