In this week’s podcast, “George Gilder on Superintelligent AI,” tech philosopher George Gilder and computer engineer Robert J. Marks, our Walter Bradley Center director, continued their discussion of the impact of artificial intelligence (AI). This time, they focused on whether AI is in an Indian summer (a warm period just before winter sets in) or advancing toward a superintelligence that eclipses the intellect of humans.
From the transcript: (Show Notes, Resources, and a link to the complete transcript follow.)
Robert J. Marks: Why do you believe that we are on the verge of an Indian summer in artificial intelligence?
George Gilder (pictured): Well, I just think the dream that AI is cruising toward a singularity, where it will essentially usurp human minds and then transcend the capabilities of human minds, is delusional. And so today, everybody’s talking about AI. And part of the mystique is the idea that AI… at some point, will allow the machines to design new machines, replicate themselves, and then design ever-better machines that ultimately acquire an intelligence that can be projected off into the universe and can populate the universe with machine mind. And this dream, it’s sort of a religion of the nerds. It’s the materialist superstition, a belief in a flat universe where there’s nothing but material and the laws of chemistry and physics.
This idea that ultimately human beings can retire to beaches on a guaranteed annual income. Well, maybe Brin and Page of Google and the other AI entrepreneurs fly off to nearby planets with Elon Musk in a winner-take-all universe. This is the dream of AI, and it’s all going to come a cropper. AI can’t do any of that stuff. It can do jobs that human beings define. It can perform with tremendous speed and efficiency, but it doesn’t begin to threaten human minds, to usurp human minds. It can amplify and extend human minds and relieve human minds of rote work that is really below human capabilities. But it doesn’t pose any kind of threat to human minds or even jobs for that matter.
Note: Gilder’s term “Indian summer” arises from the earlier concept of an AI winter—a period in which little progress is made in AI. Software engineer Brendan Dixon has pointed out that roughly every decade since the late 1960s has seen a promising, overhyped wave of AI that later crashed on real-world problems, leading to collapses in research funding. Dixon provides a timeline as well.
Robert J. Marks: You mentioned superintelligence, which I believe requires creativity. And I think that we both agree that AI and computers can’t be creative. You have to have software that creates better software that creates better software, and that creativity is beyond the capability of artificial intelligence. You call superintelligence “the rapture of the nerds” — one of the quotes that I really enjoyed. You, as I recall, are neighbors with one of these proponents of AI, Ray Kurzweil.
George Gilder: If you look closely at Ray’s statements, they’re becoming increasingly modest. The Singularity is coming, but the Singularity won’t really displace human beings. We’ll become better. And he understands that the idea of usurping human beings isn’t a very popular vision or a very good business plan for Google, where he now works as a director of engineering. So I’m just saying that I detect a certain moderation in Ray since he first pronounced the Singularity.
Note: It’s definitely a shift. Famed futurist Kurzweil predicted at the COSM 2019 Technology Summit that we will merge with our computers by 2045 — The Singularity. “Our intelligence will then be a combination of our biological and non-biological intelligence.” We will then be apps of our smart computers. Reasons for doubt include those raised at a COSM 2019 panel at the time: Is Ray Kurzweil’s Singularity nearer or still impossible?
Robert J. Marks (pictured): There’s been, in my perception, a decrease in talking about artificial general intelligence. You mentioned Ray Kurzweil, but I also see that from DeepMind. A few years ago, that was just a really hot topic, but now it’s been diminished. We don’t hear much talk about artificial general intelligence (AGI) anymore.
George Gilder: That’s because AI is application specific, essentially. It can be assigned to specific applications governed by specific symbol systems with specific levels of ergodicity and the assumption that a given input will always produce the same output: deterministic expectations. It’s the computer system, and all computers are ultimately application specific.
Discussing the future of computing technology as serving rather than replacing human minds, Gilder went on to talk about the connectome of the human brain (a map of all its connections), compared to the internet:
George Gilder: If you take the whole global internet, until a couple of years ago, to map all the connections in the global internet, it took about a zettabyte, that is, 10 to the 21st power bytes.
And how big do you think the connectome of one human mind is? It’s about a zettabyte. In other words, one human brain is about as densely and complexly connected as the entire global internet. But one human brain functions on 12 to 14 watts of energy, while the global internet takes gigawatts of energy, billions of watts.
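Gilder’s comparison can be checked with rough arithmetic. A sketch in Python, using the figures from the transcript; the 50-gigawatt figure for the internet’s power draw is an illustrative assumption (Gilder says only “gigawatts”), not a measured value:

```python
# Back-of-envelope comparison of brain vs. internet scale and power,
# using the transcript's figures. The 50 GW internet figure is an
# assumed order-of-magnitude placeholder, not a measured value.

connectome_bytes = 10**21      # ~1 zettabyte for one human connectome
internet_map_bytes = 10**21    # ~1 zettabyte to map the global internet

brain_watts = 13               # midpoint of Gilder's 12-14 W estimate
internet_watts = 50 * 10**9    # assumed: ~50 GW ("gigawatts of energy")

# Comparable information scale, vastly different energy budgets:
efficiency_ratio = internet_watts / brain_watts
print(f"Brain is roughly {efficiency_ratio:.0e} times more power-efficient")
```

On these (admittedly loose) numbers, the brain delivers comparable connective complexity on roughly a billionth of the power, which is the point Gilder is pressing.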
So ultimately, people just don’t really understand the mind very well. When they talk about mind being a machine, they just don’t understand it. They don’t understand human beings, created in the image of their Creator to be creative and conscious. And all these visions are just absent from the AI model, so that the singularity is achieved, not by a giant advance of technology, but by a delusional diminution of the human mind to a binary machine.
Note: The importance of the connectome is a recent discovery: “For a long time, it was believed that the white matter did not do very much and its signals were generally excluded from brain mapping studies as noise. But that has all changed in recent years. From the little we understand about our hundred-billion-neuron brains, connection is everything. The challenge? The unthinkably large number of connections.”
Note: You may also enjoy: Can AI really evolve into superintelligence all by itself? “At Science earlier this year it was claimed that Darwinian evolution alone can make computers much smarter. As a result, researchers hoped to ‘discover something really fundamental that will take a long time for humans to figure out.’” We caught up with some computer professionals and asked about the probabilities…
Here are the first and second parts of George Gilder’s podcast discussion with Robert J. Marks, including links to the transcripts:
Why is AI a key battleground in philosophy and religion? Tech philosopher George Gilder explains. Spoiler: He thinks humans will win. The belief that AI is superior to human ingenuity, in Gilder’s view, stems from mistaking maps for territory and models for reality.
What if fast computers get in the way of carefully considering information before starting trades? Tech philosopher George Gilder explains why ever bigger computers running the stock market are the road to sudden panics, not to stable prosperity.
- 00:29 | Introducing George Gilder
- 01:00 | An “Indian summer” in AI?
- 03:45 | Superintelligence
- 06:04 | The future of computing technology
- 09:21 | Tony Stretton’s paradox
- 13:51 | An ultra intelligent machine