
We went back to visit Gödel, Escher, and Bach…

Forty years after publication, how has a big explain-the-mind book withstood the test of time?

In August 2019, it will be forty years since Indiana University cognitive scientist Douglas Hofstadter published his Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid (hereafter GEB). Its stories and dialogues about logic, mathematical puzzles, musical improvisations, and computational techniques play off the genius of mathematician Kurt Gödel, artist M.C. Escher, and composer Johann Sebastian Bach. In the early days of modern artificial intelligence (AI), his provocative work aimed to demonstrate a beautiful symmetry and connectedness across mathematics, art, and music. Many walked away from the book feeling he had succeeded while missing his true purpose. As a wide-eyed college student in 1984, I was one of them.

The second time I read it, in 2013, I was a trained philosopher with a less naïve outlook. I was reading the 20th-anniversary edition (1999), in whose preface Hofstadter hastens to clarify his original purpose: The three luminaries are not the central figures of the book. The book was intended to ask the fundamental question of how the animate can emerge from the inanimate, or more specifically, how does consciousness arise from inanimate, physical material? As philosopher and cognitive scientist David Chalmers has eloquently asked, “How does the water of the brain turn into the wine of consciousness?”

Hofstadter believes he has the answer: the conscious “self” of the human mind emerges from a system of specific, hierarchical patterns of sufficient complexity within the physical substrate of the brain. The self is a phenomenon that rides on top of this complexity to a large degree but is not entirely determined by its underlying physical layers.

In the 1999 preface, he notes an apparent contradiction. When we look at computers, we see inflexible, unintelligent, rule-following beasts with no internal desires, which he describes as “the epitome of unconsciousness.” Is it a contradiction that intelligent behavior can be programmed into unintelligent machines? Is there an “unbreachable gulf” between intelligence and non-intelligence?

Hofstadter believes that through large sets of formal rules and levels of rules generated by AI, we can finally program these inflexible computers to be flexible, thinking machines. If so, we were wrong in thinking that there is a marked difference between human minds and intelligent machines. The rules and levels of rules that govern the behavior of thinking machines may apply similarly to human minds, resolving any apparent contradiction. So how does his model work?

Hofstadter cites advances in AI, particularly advances in language, as the key to intelligent programs. Intelligent programs consist of a series of levels of hardware and software, as seen in the diagram below (slightly modified from GEB for clarity). The highest level involves symbol processing, which Hofstadter sees as analogous between AI neural networks and human neural brain states.

The higher levels need not know what is happening in the lower levels, but each does need to interface with the level immediately below. In the same way, our minds interact with the billions of neurons in our brains, but we need not know how they work. The full power, however, resides at the machine level, which constrains what the higher software levels can do.
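This level scheme can be illustrated with a minimal sketch (the class and method names below are hypothetical, chosen for illustration, not taken from GEB): each level exposes an interface to the level above and delegates to the level immediately below, so the symbol level never touches the machine level directly.

```python
# A minimal sketch of a hierarchy of levels: each level knows only the
# level immediately below it. Names are illustrative, not from GEB.

class MachineLevel:
    def execute(self, instruction):
        # Lowest level: rigid, rule-following hardware operations.
        return f"machine[{instruction}]"

class SoftwareLevel:
    def __init__(self, below):
        self.below = below  # interface only to the level immediately below

    def run(self, operation):
        # Translate a higher-level operation into a machine instruction.
        return self.below.execute(operation.lower())

class SymbolLevel:
    def __init__(self, below):
        self.below = below

    def process(self, symbol):
        # Highest level: symbol processing, ignorant of machine details.
        return self.below.run(symbol)

machine = MachineLevel()
software = SoftwareLevel(machine)
symbols = SymbolLevel(software)
print(symbols.process("THINK"))  # the symbol level never sees MachineLevel
```

The point of the sketch is the dependency structure: the top level's behavior is realized by, but written without reference to, the lowest level.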

The two following diagrams demonstrate more concretely the analogs that Hofstadter suggests. In the first diagram below, Hofstadter proposes that the symbol level of the brain floats on the lower-level neural activity of the brain, which allows it to mirror the world. The neural activity of the computer can simulate these lower-level brain states, but it is not “thinking” in the same sense as the higher-level brain thinks. So, the computer model of a neural network is isomorphic (effectively, identical in form or structure) to the brain’s substrate of neurons.

Clearly, there is a gap. Forty years after the original publication, there is still no AI that comes remotely close to matching the human brain.

In the next diagram, Hofstadter further identifies the isomorphic “gap,” based on the state of AI research as it was in 1999. He hoped that, through brain research, the symbolic levels of the brain could be “skimmed off” the neural substrate and implemented on a computer. But that is where he is stuck; he has offered no novel ideas on how, specifically, to do that.

Hofstadter admits that AI has very far to go to match the brain, but he sees no reason to believe it will not eventually express the full range of emotions that humans do, which would include writing beautiful music. Humans, in his view, are rule-governed just like any computer program. He argues it is merely an illusion that we believe we are not rule-governed. In his own words:

A reductionistic explanation of the mind, in order to be comprehensible, must bring in ‘soft’ concepts such as levels, mappings, and meanings. In principle, I have no doubt that a totally reductionist but incomprehensible explanation of the brain exists; the problem is how to translate it into a language we ourselves can fathom.

Douglas Hofstadter, “Gödel, Escher, Bach,” p. 709

Hofstadter is undoubtedly a physicalist (everything that really exists is physical). He also appears to be an “emergentist” (he thinks that mental causes can occur but their origin is physical).

So what has happened with AI in the twenty years since 1999 that might bear on his thesis?

He sees language advances in AI as the key to replicating human intelligence. So we can look to an observation from MIT physicist and cosmologist Max Tegmark in his 2017 book, Life 3.0: Being Human in the Age of Artificial Intelligence:

Natural language processing is now one of the most rapidly advancing fields of AI, and I think that further success will have a large impact because language is so central to being human. The better an AI gets at linguistic prediction, the better it can compose reasonable email responses or continue a spoken conversation. This might, at least to an outsider, give the appearance of thought taking place.

It sounds promising but he goes on to say,

Language-processing AI still has a long way to go, though. Although I must confess that I feel a bit deflated when I’m out-translated by an AI, I feel better once I remind myself that, so far, it doesn’t understand what it’s saying in any meaningful sense. From being trained on massive data sets, it discovers patterns and relations involving words without ever relating these words to anything in the real world.

In other words, AI language-processing neural networks have no understanding of context or meaning with respect to the symbols they manipulate, so it is hard to say whether we are any closer to an AI behaving with human-like intelligence. While Tegmark cannot rule it out, he doubts that we will achieve what he considers human-level artificial general intelligence (AGI), that is, the ability of an AI to accomplish any task that humans can, at least as well as humans, any time in the foreseeable future. And AGI is what Hofstadter sees as the Holy Grail of AI.
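Tegmark's point, that such systems discover word patterns without ever relating words to the world, can be seen even in a deliberately simplified toy bigram model (a sketch for illustration only, not any production system): it "continues" text purely from co-occurrence counts, with no representation of what any word means.

```python
from collections import defaultdict
import random

# Toy bigram language model: learns which word tends to follow which,
# purely from co-occurrence counts in its training text. It has no
# grounding -- no link between its words and anything in the world.

def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def continue_text(counts, word, length=5, seed=0):
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Pick the next word in proportion to how often it followed.
        word = rng.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

corpus = "the mind emerges from the brain and the brain is physical"
model = train_bigrams(corpus)
print(continue_text(model, "the"))
```

Whatever it emits is statistically plausible given the corpus, yet nothing in the program refers to minds or brains; that gap between pattern and meaning is exactly what Tegmark describes.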

There have been no indications that AI programs will ever break free of these constraints, and further, no demonstration that humans are rule-governed in the way that Hofstadter posits. Indeed, the view that human consciousness is something unique is the most tenable philosophical position unless we learn definitively otherwise.

There is, quite simply, no mechanical explanation of how the human mind has emerged from brawling chimpanzees over the course of millions of years of evolution.


Further reading on theories of the human mind:

Panpsychism: You are conscious but so is your coffee mug. Materialists have a solution to the problem of consciousness, and it may startle you

How can consciousness be a material thing? Maybe it can’t. But materialist philosophers face starkly limited choices in how to view consciousness

and

Four researchers whose work sheds light on the reality of the mind. The brain can be cut in half, but the intellect and will cannot, says Michael Egnor. The intellect and will are metaphysically simple.

Featured image: Drawing gears/Olly, Adobe Stock


Walter Myers III

Board of Directors, Discovery Institute
Walter is a Principal Engineering Manager leading a team of engineers, working with customers to drive their success in the Microsoft Azure Cloud. He holds a Master’s Degree in Philosophy from Biola University's Talbot School of Theology, where he is an adjunct faculty member in the Master of Arts in Science & Religion (MASR) program teaching on Darwinian evolution from a design-centric perspective. He is also a board member of the Orange County Classical Academy (OCCA), a classical charter school in Southern California associated with Hillsdale College.
