
Why Big Data Can Be the Enemy of New Ideas

Copernicus could tell us how that works: Masses of documentation entrench the old ideas

In “Hyping Artificial Intelligence Hinders Innovation” (podcast episode 163), Andrew McDiarmid interviewed Erik J. Larson, programmer and author of The Myth of Artificial Intelligence (Harvard University Press, 2021), about how AI took a wrong path regarding what machines can and can’t do. Now they look at a critical fact: Big Data can easily be the enemy of new ideas.

This portion begins at roughly the 30-minute mark. A partial transcript with notes, Show Notes, and Additional Resources follow.

Andrew McDiarmid: Are there lessons about the ethics of innovation from the past that would be useful to us today? Can you think of anything they learned about innovation in the past that we could really learn from as we’re innovating today?

Erik Larson: You could go all the way back to some of the core discoveries in science. [They] were definitely not big data driven discoveries.

One of my favorite examples is Copernicus who, of course, gave us the heliocentric [sun-centered] view of the solar system. There was a theory that had persisted for 800, 900 years called the Ptolemaic model where the Earth was at the center of the cosmos. Copernicus obviously flipped that… and the original model that he constructed, this sun-centered model, was actually not as predictive!

It solved a couple of difficult problems in astronomy, but [astronomers] had been accumulating data to patch up the [Ptolemaic] model for hundreds of years. So they had, effectively, a pretty accurate view of the movements of celestial objects. So when Copernicus actually proposed his theory, it went against all of this massive accumulation of data. It was actually a human mind’s insight that broke from this kind of bell-curve inductive thinking.

In the book I think I say… all the data was fit to the wrong curve. How could AI have helped? It would have actually just further entrenched that conclusion and tried to optimize the geocentric model.

It took a person, an innovator, and it was in spite of, not because of, all the data. So, in the history of innovation and scientific discovery, you see these moments where people have a shift in thinking that really can’t be accounted for by any kind of mechanistic analysis of what was going on before. …

It was Norbert Wiener (1894-1964) who said, “If we don’t invest in human intelligence in our society, we’re unlikely to have a lot of it.” So one lesson of innovation is to take Wiener’s point seriously: we need to invest in people…
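
Note: Larson’s remark that “all the data was fit to the wrong curve” can be made concrete with a small sketch. The Python illustration below is ours, not from the podcast: when the model family is wrong, adding more free parameters (the computational analogue of adding epicycles) keeps shrinking the error on the data you already have, so purely inductive optimization rewards patching the incumbent theory.

```python
# Illustrative sketch: a wrong model family keeps "improving" with more parameters.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.05, x.size)  # the "true" theory: a sine wave

# Fit polynomials of increasing degree -- the wrong curve family,
# analogous to piling epicycles onto the Ptolemaic model.
for degree in (3, 5, 9):
    coeffs = np.polyfit(x, y, degree)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print(f"degree {degree}: in-sample RMS error = {rms:.4f}")

# The error falls with every added term, so an optimizer would keep adding
# terms; nothing in the fit statistics says "try sin(x) instead."
```

Nothing here is specific to astronomy; the point is only that in-sample fit rewards elaborating the incumbent model, which is exactly the entrenchment Larson describes.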

Andrew McDiarmid: Well, the more tech we have around us, the more we start to get the feeling that tech is the answer to everything. We get that reinforced by Big Tech and its big yearly or even semi-yearly announcements of new products and updates, and if we’re not careful, we’ll just assume tech is the answer to everything. But you have said that there are some problems in society that are just fundamentally non-technical. Can you expand on that a little bit? Why isn’t everything solvable with tech, do you think? Is that just because of the fundamental difference between machine and mind?

Erik Larson: I could give the case of neuroscience research. There was something like a billion euros invested in trying to understand how the brain works… The goal of that project was to reproduce the human brain on a supercomputer. The idea was that if we could just get to a sufficient level of granularity (neurons, systems of neurons, and so on) and code those connections in an artificial neural network, we could build an actual human brain.

Note: See Why Did the Human Brain Project Crash and Burn?: “The human brain exceeds the most powerful computers in efficiency. It’s also not clear exactly how it works. Lemurs, with brains 1/200th the size of a chimpanzee’s brain, passed the same IQ test. And this is to say nothing of the little understood relationship between the human brain and the human mind… Underlying the quarrels and stalemates of the Human Brain Project may be practical problems with the idea of simply simulating the brain on a computer.”
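
Note: For a feel of the scale involved in “just build the brain in software,” here is a rough back-of-envelope estimate of ours (the neuron and synapse counts are commonly cited orders of magnitude, not figures from the article):

```python
# Back-of-envelope: storage for a synapse-level model of the human brain.
# Commonly cited orders of magnitude, used here purely for illustration.
neurons = 8.6e10          # ~86 billion neurons
synapses = 1.0e14         # ~100 trillion synaptic connections
bytes_per_weight = 8      # one 64-bit float per connection weight

terabytes = synapses * bytes_per_weight / 1e12
print(f"~{terabytes:,.0f} TB just to store static connection weights")
print(f"~{synapses / neurons:,.0f} synapses per neuron on average")
```

And a table of weights is the easy part: it says nothing about timing, neurochemistry, or plasticity, the very gaps in theory that Larson argues a software-first project is not positioned to look for.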

Erik Larson: Of course it was a total failure… The guy who started it actually ended up getting fired for a variety of reasons. But tech didn’t solve that problem in science, because focusing on technology rather than the actual natural world turned out not to be a good idea. It’s almost like inserting an artificial layer. Trying to convert basic research in neuroscience into a software development project just means you’re going to end up with software ideas, ideas that are programmable on a computer. Your scientists are going to be working with existing theories, because those are the ones you can actually write and code, and they’re not going to be looking for gaps in our existing theoretical knowledge of the brain.


The introduction of technology as the driving force for success in that project was a really terrible, terrible idea. I remember thinking, that’s never going to work. So there’s just [a] billion euros or something wasted. The United States had a similar project under former president Obama which has been a little bit more congenial to actual human research, and so it’s met with a little bit more success. But the point is that the idea that you can replace human thinking, human science, human insight, and the hard work of scientific investigation and discovery with supercomputers is just a really bad idea. I mean, I’m tempted to say it’s kind of a stupid idea, frankly. Why would anybody believe that that’s going to work?

Computation in general — and engineering in general — is a downstream kind of idea. It’s not a direct connection to nature and to the world around us. So when you’re trying to take these downstream ideas and make them central to investigation, you’re never going to get to the ground truth. You’re never going to grab the root of the problem…

But a certain amount of that [computation] just greases the wheels, the modern wheels, right? A certain amount of it is how we keep however many billion people on the planet connected, and a lot of it is necessary to drive business and other aspects of the modern world. But there is always going to be this tension.

Andrew McDiarmid: Well, in Part Three of your book, you finish by warning us of the consequences of carrying the myth of AI into the future. What will happen if we don’t get on the right path with AI? And what will happen if we do in your opinion?

Erik Larson: I ended the book with a question from the investor Peter Thiel. He was asking, “Has innovation dried up?” In other words, did we pick all the low-hanging fruit, and is that why we see a stagnation in innovation today? By the way, I agree with his assessment of the world circa 2021: you don’t see a lot of innovation in AI as a field, or just in general, right? AI itself has been in the same mode, I would argue, for 20 years, and deep learning was roughly 2012. So we’re just about a decade into this same way of thinking in AI, and there’s just nothing new coming out of AI science anymore. We don’t see a lot of new, fantastically interesting things coming out of culture; we just see Twitter fights. It’s the same stuff; there’s nothing new happening.

So did we pick all the low-hanging fruit from the scientific discoveries of the last century on up through the advent of the web, or do we have some kind of perversion of culture itself, so that we can’t find new ideas?… I actually work in the field of AI: I work for an AI company, my title is research scientist, and I want to continue to find innovative ways of doing AI. Since we’re not getting rid of it, we might as well try to make it work better…

Andrew McDiarmid: Or we need to regroup to figure out what to do with it… you can’t reduce humans to a bunch of data points either. That just undermines creativity and originality, and also spontaneity. So I think we’ll be wrestling with this for a while.

Sometimes you’ll hear folks say, “Oh, I stumbled on this new TV show and I just love it.” … We’re not really stumbling on things anymore, because they’re being brought to us by algorithms we don’t really see and don’t really sense. I still want to be able to say that I accidentally found you, or that I just so happened to be in the area. It’s that constant struggle to keep humanity going without technology almost taking over.

Erik Larson: That’s a really good point, I would say, especially with the question of Big Tech. There’s something nefarious going on beyond just the philosophical idea of treating people like data points, which certainly is the underlying worldview. That is what’s happening: the view is that the person is a bunch of trackable data points in a kind of Cartesian coordinate system.

That’s the idea, but one of the big threats to that kind of life experience is that the tech companies are actually not just trying to predict: it’s more effective to predict what you’re going to do next if they can control, to some degree, what you’re going to do next. There’s a fantastic book out by Shoshana Zuboff called The Age of Surveillance Capitalism where she explains that it’s not just that they’re collecting data; they’re actively trying to manipulate your choices, and manipulating choices involves reducing them.

So if you’re doing unexpected things, they’re making less ad revenue. It sounds like something right out of a sci-fi scenario, like it can’t possibly be happening. It’s a perverse business model, and the connection back to AI, incidentally, is that if you take away big data AI, you don’t have the number-crunching capacity to track and manipulate two billion people on the internet. You need the AI to crunch the numbers…

Andrew McDiarmid: It’s one reason I don’t use Google anymore to search. Number one, I don’t want to line their pockets with my data points of what I’m searching for, and number two, I don’t want to be influenced in what I find by them.

Next: Slipping free from Big Tech’s noose, simplified


Here is the whole discussion:

  1. How AI changed — in a very big way — around the year 2000. With the advent of huge amounts of data, AI companies switched from using deductive logic to inductive logic. Erik Larson, author of The Myth of Artificial Intelligence (Harvard 2021), explains the immense power that using inductive logic on Big Data gave to Big Tech firms.
  2. Did Alan Turing’s change of heart set AI on the wrong path? Erik Larson, author of The Myth of Artificial Intelligence, thinks Turing lost track of one really important way minds differ from machines. Much interaction between humans requires us to understand what is being said and it is not clear, Larson says, how to give AI that capability.
  3. Why Big Data can be the enemy of new ideas. Copernicus could tell us how that works: masses of documentation entrench the old ideas. Erik Larson, author of The Myth of Artificial Intelligence (2021), notes that, apart from hype, there is not much new coming out of AI anymore.
  4. Understanding the de facto Cold War with China. High tech is currently a battlefield between freedom and totalitarianism. At a certain point, Andrew McDiarmid thinks, it’s time to just turn it all off. But then, what’s left?

You may also wish to read: Harvard U Press Computer Science author gives AI a reality check. Erik Larson told COSM 2021 about real limits in getting machines that don’t live in the real world to understand it. Computers, he said, have a very hard time understanding many things intuitive to humans and there is no clear programming path to changing that.

Show Notes

  • 00:44 | Introducing Erik Larson
  • 01:59 | What is the AI Landscape?
  • 04:03 | How did Erik become interested in AI?
  • 12:39 | Mind and Machine
  • 16:40 | The Simplified World
  • 20:48 | Different Types of Reasoning and AI
  • 29:53 | Lessons from the Past
  • 34:02 | The Human Brain Project
  • 38:23 | AI in the Future
  • 42:27 | AI and Big Tech
  • 53:58 | Turn it Off
  • 57:41 | Stuck in the Modern World
  • 58:51 | Human Exceptionalism

Additional Resources

Podcast Transcript Download


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
