The connectome is, in theory, the complete description of structural connectivity, that is, the physical wiring of a life form’s nervous system. Pop neuroscience enables us to talk brashly about such things.
In 2010, computational neuroscientist Sebastian Seung informed us, “I am my connectome,” a thought revisited in his 2012 book, Connectome: How the Brain’s Wiring Makes Us Who We Are—another instalment, we are told, in “the bold and thrilling quest to finally understand the brain.”
He wasn’t alone. In 2012, National Institutes of Health director Francis Collins was saying the same sort of thing: “Ever wonder what is it that makes you, you? Depending on whom you ask, there are a lot of different answers, but these days some of the world’s top neuroscientists might say: ‘You are your connectome.’”
Connectomics, as the field is called, is reductionist, but not conventionally so. A neuroscience PhD candidate recently explained:
A complete human connectome will be a monumental technical achievement. A complete wiring diagram for a mouse brain alone would take up two exabytes. That’s 2 billion gigabytes; by comparison, estimates of the data footprint of all books ever written come out to less than 100 terabytes, or 0.005 percent of a mouse brain.
– Grigori Guitchounts, “An Existential Crisis in Neuroscience” at Nautilus
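The proportions in that comparison are easy to verify. A minimal sketch, using only the byte figures quoted above:

```python
# Sanity-check the data-size comparison in the quote above.
# Figures from the quote: 2 exabytes for a mouse connectome,
# ~100 terabytes for all books ever written.

GIGABYTE = 10**9
TERABYTE = 10**12
EXABYTE = 10**18

mouse_brain = 2 * EXABYTE
all_books = 100 * TERABYTE

print(mouse_brain // GIGABYTE)        # → 2000000000 (2 billion gigabytes)
print(100 * all_books / mouse_brain)  # → 0.005 (books as a percent of a mouse brain)
```

Both of Guitchounts’s figures check out: 2 exabytes is indeed 2 billion gigabytes, and 100 terabytes is 0.005 percent of that.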
So connectomics is reductionist in the sense that it seeks to reduce human existence to a wiring diagram—but it makes no pretence that the diagram would be easy to understand. In fact, that’s precisely what haunts Grigori Guitchounts, as he seeks to hang on to materialism. He asks a senior neuroscientist, Harvard’s Jeff Lichtman (who is trying to map the brain), for some thoughts:
Lichtman’s lab happens to be down the hall from mine, so on a recent afternoon, I meandered over to his office to ask him about the nascent field of connectomics and whether he thinks we’ll ever have a holistic understanding of the brain. His answer—“No”—was not reassuring, but our conversation was a revelation, and shed light on the questions that had been haunting me. How do I make sense of gargantuan volumes of data? Where does science end and personal interpretation begin? Were humans even capable of weaving today’s reams of information into a holistic picture? I was now on a dark path, questioning the limits of human understanding, unsettled by a future filled with big data and small comprehension.
– Grigori Guitchounts, “An Existential Crisis in Neuroscience” at Nautilus
Now, where does New York City come in?
“I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’”
– Grigori Guitchounts, “An Existential Crisis in Neuroscience” at Nautilus
Language, Lichtman argues, is not the correct tool for the kind of understanding required. Oddly, Guitchounts did come across a tool of sorts, a short story by Jorge Luis Borges (1899–1986). In the story, cartographers, striving for excellence, publish a map of an empire of such detailed accuracy and complexity that it is as big as the empire itself and entirely useless, producing an awesome ruin.
Guitchounts, more optimistic than Borges, hopes that Deep Learning will come to the rescue:
It seems likely that Lichtman’s two exabytes of brain slices, and even my 48 terabytes of rat brain data, will not fit through any individual human mind. Or at least no human mind is going to orchestrate all this data into a panoramic picture of how the human brain works. As I sat at my office desk, watching the setting sun tint the cloudless sky a light crimson, my mind reached a chromatic, if mechanical, future. The machines we have built—the ones architected after cortical anatomy—fall short of capturing the nature of the human brain. But they have no trouble finding patterns in large datasets. Maybe one day, as they grow stronger building on more cortical anatomy, they will be able to explain those patterns back to us, solving the puzzle of the brain’s interconnections, creating a picture we understand. Out my window, the sparrows were chirping excitedly, not ready to call it a day.
– Grigori Guitchounts, “An Existential Crisis in Neuroscience” at Nautilus
But, as Roman Yampolskiy warns, Deep Learning, chugging through vast seas of data, may frustrate Guitchounts’s ambitions, presenting us with findings whose origin we cannot understand, leading to doubt about their veracity:
For example, explanations may be too long to be surveyed (Unsurveyability), Unverifiable, or too complex to be understood, making the explanation incomprehensible to the user. Any AI, including black box neural networks can in principle be converted to a large decision tree of nothing but “if” statements, but that will only make it human-readable not human-understandable.
– Roman Yampolskiy, “Unexplainability and incomprehensibility” at Mind Matters News
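Yampolskiy’s point can be seen even at toy scale. Here is a hypothetical two-neuron threshold network computing XOR, “converted” to nothing but if-statements (the weights and structure are invented for illustration; a real network would yield millions of such branches, each one readable, the whole no more understandable than the original):

```python
# A tiny threshold network for XOR, flattened into pure if-statements.
# Hypothetical illustration of a net-to-decision-rules conversion.

def xor_net(x1: int, x2: int) -> int:
    # hidden neuron h1: fires if x1 OR x2 (threshold 1)
    if x1 + x2 >= 1:
        h1 = 1
    else:
        h1 = 0
    # hidden neuron h2: fires if x1 AND x2 (threshold 2)
    if x1 + x2 >= 2:
        h2 = 1
    else:
        h2 = 0
    # output neuron: fires if h1 AND NOT h2
    if h1 - h2 >= 1:
        return 1
    return 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # 0 0 -> 0, 1 0 -> 1, 0 1 -> 1, 1 1 -> 0
```

Every branch is legible on its own; the question Yampolskiy raises is what happens when the same tree has billions of branches.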
But how much would that final incomprehensibility really change? Guitchounts also tells us,
Neuroscientists have had the complete wiring diagram of the worm C. elegans for a few decades now, but arguably do not understand the 300-neuron creature in its entirety; how its brain connections relate to its behaviors is still an active area of research.
– Grigori Guitchounts, “An Existential Crisis in Neuroscience” at Nautilus
Really? We do not know how the brain connections of even a nematode worm relate to its behavior?
Very well then, let’s go back briefly to New York City. It may not be entirely correct to say that no one really understands the city. In their different ways, some artists, songwriters, and novelists do understand it, in the sense that they can portray it convincingly. Some public figures understand it in the sense that they can connect with people about local issues.
If the brain is as incomprehensible as Guitchounts—against his own wishes—implies, perhaps that level of understanding, the kind of connection we can actually grasp, will continue to be our best road map for the foreseeable future.
Further reading on understanding the brain (or not):
We will never “solve” the brain. A science historian offers a look at some of the difficulties we face in understanding the brain. In a forthcoming book, science historian Matthew Cobb suggests that we may need to be content with different explanations for different brain parts, and that the image of the brain as a computer is definitely on the way out.
Unexplainability and incomprehensibility of AI: In the domain of AI safety, the more accurate the explanation is, the less comprehensible it is. (Roman Yampolskiy)