A Neuroscientist on Why We Can Build Human-like Brains

Manuel Brenner, a particle physicist as well as a neuroscientist, thinks pattern recognition is the answer

Manuel Brenner, a particle physicist who became a theoretical neuroscientist, argued last year that human intelligence is less complex than we make it out to be, and thus that building an artificial intelligence might be easier than we suppose. He offers some intriguing arguments; here are some responses:

➤ Is the information we need for building human-like AI in our genes? He doesn’t think so, because a tomato has 7,000 more genes than a human being. Further, our human genome offers only 25 million bytes of information for our brain’s design, yet there are 10¹⁵ connections in the adult neocortex. His conclusion? “there needs to be a much simpler, more efficient way of defining the blueprint for our brain and for our neocortex.”
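The arithmetic behind that conclusion is worth making explicit. Using Brenner’s own figures (the calculation below is ours, not his), the genome has far less than one bit to spend per connection:

```python
# Rough arithmetic behind Brenner's point (his figures, our calculation):
# the genome cannot be an explicit wiring diagram for the neocortex.
genome_bytes = 25e6          # ~25 million bytes attributed to brain design
genome_bits = genome_bytes * 8
connections = 1e15           # ~10^15 connections in the adult neocortex

bits_per_connection = genome_bits / connections
print(f"{bits_per_connection:.1e} bits per connection")  # ~2.0e-07
# Far less than one bit per connection, so at most the genome encodes
# compact generative rules, not a connection-by-connection blueprint.
```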

He’s right about the genome, of course. When it comes to genomes, humans and tomatoes are both small stuff. BBC’s ScienceFocus tells us that the Japanese canopy plant’s genome is about 50 times the size of a human’s, and the marbled lungfish’s roughly 44 times as large. So what we need to know about the significance of a human being, as opposed to a tomato or a lungfish, is not encoded in our genes. Then how do we know it is encoded in the neocortex?

➤ Brenner acknowledges the massive neuroplasticity of the brain, that is, its ability to cope flexibly with changes and stresses: “Neuroplasticity indicates that most brain regions can easily take on tasks previously carried out by other brain regions, showing a certain universality behind their design principles.” He thinks that the structures and patterns that enable this flexibility can be harnessed to produce artificial intelligence—once we discover them. Citing Ray Kurzweil’s view in How to Create a Mind that pattern recognition forms the foundation of all thought, Brenner offers:

Take language and writing as an example. Small lines build up patterns that we can recognize as letters. Assortments of letters form words, then sentences. Sentences form paragraphs, whole articles. And in the end, out of an assortment of a small number of minimal patterns arranged in a highly specific way, narrative and meaning emerge.

Manuel Brenner, “Why Intelligence might be simpler than we think” at Towards Data Science (November 3, 2019)

But wait, these patterns only exist because conscious minds recognize them. “Narrative and meaning” don’t emerge from language conventions. They emerge from the mind of the person who uses language conventions to build up “whole articles.” In fact, the use of these language conventions (tools) also emerged from earlier minds. Whether or not pattern recognition is the foundation of all thought, we’d best not confuse the tools with the users.

➤ Brenner, sensing the difficulty, invokes information theory: “A step towards understanding how this architecture could work so well for us can lie in realizing that the brain can be thought of as an information processing device.” The quest then becomes a search for a universal algorithm for learning:

Something akin to this universal algorithm might also be used by the brain, although we are not yet quite sure how the brain learns from an algorithmic perspective. As the most basic example, there’s, of course, Hebbian learning, which has been shown to take place in the brain to some extent. For more sophisticated algorithms, researchers have been trying to find biologically plausible mechanisms for implementing backpropagation in the brain, among many other things.

But it is clear that the brain is very good at learning, and needs to do so in a way that we can in principle understand and very probably model on our computers.

Manuel Brenner, “Why Intelligence might be simpler than we think” at Towards Data Science (November 3, 2019)
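For readers curious what that “most basic example” looks like in practice, here is a minimal sketch of Hebbian learning for a single linear neuron. It is our illustration, not Brenner’s code; the parameters are arbitrary and real synaptic learning rules are far more constrained:

```python
import numpy as np

# Minimal sketch of Hebb's rule ("cells that fire together wire together"):
# dw = eta * x * y, with the weight vector renormalized so it stays bounded.
rng = np.random.default_rng(0)
eta = 0.01                          # learning rate
w = rng.normal(size=2) * 0.1        # small random initial weights

for _ in range(2000):
    x = rng.normal(size=2)
    x[1] = x[0] + 0.1 * rng.normal()   # the two inputs are correlated
    y = w @ x                          # postsynaptic activity
    w += eta * y * x                   # Hebbian update
    w /= np.linalg.norm(w)             # keep |w| = 1

print(w)  # settles near +/-[0.71, 0.71], the inputs' main correlation axis
```

Even this toy neuron ends up encoding the dominant statistical regularity in its inputs, which is the sense in which Hebbian learning “finds patterns.”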

The move from genetics to information theory is promising. But we might pause to remember that information is very hard to square with matter and energy, the other two big aspects of our universe. Information is measured in bits and bytes, not in kilograms or joules. It follows different laws too. For example, information is not diminished when shared.
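To make “measured in bits” concrete, here is the standard Shannon entropy calculation (a textbook illustration, not something from Brenner’s article):

```python
from math import log2

# Shannon entropy: the information content of a source, in bits per symbol.
def entropy_bits(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit   (fair coin)
print(entropy_bits([0.9, 0.1]))   # ~0.469 bits (biased coin: less surprise)
print(entropy_bits([0.25] * 4))   # 2.0 bits  (fair four-sided die)
# No mass or energy term appears anywhere: the measure depends only on the
# probability distribution, not on any physical carrier.
```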

Brenner discusses various possibilities for getting around the problems and then restates his thesis that the human neocortex consists of pattern recognizers:

This is the job of the brain. At its core, it’s an information filtering and ordering device constantly learning useful patterns from data…

Compression and information filtering could well be at the core of what we think of as intelligence, so we might as well learn something from it (as we have been already) when building our own intelligent systems.

Manuel Brenner, “Why Intelligence might be simpler than we think” at Towards Data Science (November 3, 2019)
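As a rough illustration of compression as pattern-finding, an off-the-shelf compressor such as Python’s zlib already behaves this way: it shrinks data exactly to the degree that the data contains learnable regularities (the example is ours, not Brenner’s):

```python
import zlib
from random import Random

# A compressor shrinks structured data but cannot shrink patternless data.
rng = Random(0)
patterned = b"the cat sat on the mat. " * 100                 # highly regular
random_bytes = bytes(rng.randrange(256) for _ in range(2400))  # no structure

print(len(zlib.compress(patterned)))     # small: the repetition is "learned"
print(len(zlib.compress(random_bytes)))  # ~2400 or more: nothing to exploit
```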

Well yes, but that’s where we started. We were going to build an artificial intelligence modeled on the human brain. But our project assumes, rather than proves, that the brain even works that way—and that all we need to know in order to produce an artificial intelligence is how the human brain works. Brenner admits as much: “So just stacking up pattern recognizers won’t suddenly bring about robots running around reasoning like humans.”

He then restates his faith that we can build artificial intelligences—without having made clear the basis for it, citing Ray Kurzweil: “(Kurzweil predicts machines passing the Turing test in 2029 and human-level AI in 2045).”

But… finally, in the last two sentences, he does make clear a basis for his faith: “Because after all nature came up with intelligence through the blind fancies of evolution. And it looks like we might come up with it as well soon.”

So, to be clear, the basis for Brenner’s confidence is not advances in computer science or neuroscience as such. The basis is that human intelligence originated by accident (“blind fancies”). He is entitled to that opinion but he hasn’t offered evidence for thinking that it is science.


You may also enjoy: Are our minds just an extension of the minds of our cells? A prominent philosopher and a well-known biologist make the case, offering an illustration. Daniel Dennett and Michael Levin ask us to imagine that a model car has arrived and must be assembled according to instructions.

