Mind Matters Natural and Artificial Intelligence News and Analysis
[Image: Crystal ball with building from antiquity. Photo by 2 Bros Media at Unsplash.]

Futurism Doesn’t Learn from Past Experience

Technological success stories cannot be extrapolated into an indefinite future

Futurist predictions based on the vast promise of artificial intelligence (AI) are riding the wave of scientific optimism in general. Scientific progress is undeniable, hence the easy faith.

Whatever the limits of “AI,” if we believe in the idea of progress, we might invoke it in response to skepticism. In short, the curve trends upward, and one day it will reach human intelligence. That is the March of Science—or so the story runs.

To believe, simply commit to the idea that science solves problems, that AI is a scientific problem, and that one day we will therefore witness in AI what today is still futuristic. There are no limits; human-level intelligence is reached and then, by the same progressive logic, exceeded by a superintelligence.

Much of the discussion about AI by dyed-in-the-wool futurists like Ray Kurzweil piggybacks on this view of science which, at its extremes, is really a worldview: scientism. Scientism is just the belief that all problems (at least those that can be expressed in language) are solvable by science. Scientism assures nervous empiricists that religion and even philosophy are but temporary pit-stops on the road to a full scientific account of ourselves and the world around us.

The Enlightenment introduced the psychology of scientism, even though no one knows whether scientism is true. In fact, we have at least one good reason to believe it's false: consciousness is not illuminated, theoretically or otherwise, by descriptions of "functions and structures" in the brain, as New York University philosopher David Chalmers once put it. "Functions and structures" are the business of empirical science, so Chalmers is saying that science can't explain consciousness. Ever.

No one ever refuted Chalmers or others of his ilk. Materialists like Daniel Dennett simply declared “consciousness” to be an illusion. (That pain you feel isn’t really a pain. Come to think of it, you don’t “feel” either. Depressing.) Scientists who are not under the spell of scientism don’t really believe this (maybe Dennett doesn’t either, who knows?). Consciousness to many scientists is, as for the rest of us, an ongoing mystery. Science may one day shed more light and demystify it but in the meantime, it’s still politically correct to plead ignorance on such thorny philosophical conundrums.

Scientism is dangerous to the extent that believing that something—anything—is inevitable enables us to stop worrying about how it will come to pass. By what mechanism? Theory? Scientism as a worldview is also itself unscientific, to the extent that “meta” knowledge about the powers of science can only be based on a clear-minded assessment of all its accomplishments as well as its failures. But even if we throw out capital “B” Belief in scientism, it remains true that scientific practice tends toward progress, in the same way as working on character defects tends to fix them, even if slowly and partially (and painfully).

Scientific progress is, in this sense, a truism. We can quibble about unintended consequences like nuclear war or pollution, but it’s difficult to say that we know less after that phenomenon of Western history, the Enlightenment, redrew the lines of inquiry for everyone. Galileo (then Newton, then Einstein) figured out how gravity really works, not Aristotle. Carbon dating works better than the Bhagavad Gita for figuring out how old something is. It’s nice, also, to have antibiotics.

But "scientific progress," while real in this basic sense, is also double-sided. If scientism is not universally true, it doesn't follow that science doesn't tend toward truth or reveal true theories (ignore Kuhn, I suppose). True. But our situation is double-sided because science often shows us what can't be done, too. Science reveals limits, dead-ends, which fortunately tend to make future science more fruitful and productive. More "true." As with moral pursuits and self-knowledge, discovering limitations can be liberating. When Maxwell, Carnot, and others discovered true theories about work and entropy, all the ballyhoo about perpetual motion machines disappeared, like whale oil for lamps after petroleum and electricity.

AI futurists and enthusiasts like Ray Kurzweil, entrepreneur-extraordinaire Elon Musk, or even Bill Gates and the late astrophysicist Stephen Hawking tend, strangely, to fall back on a profoundly conservative yardstick for measuring scientific progress when they blitzkrieg public discourse with speculation about machines and our future. It goes like this: whatever technological success stories we have today thanks to scientific discovery, we can expect them all to be perfected eventually. The premise in this oddly ignorant argument is simply that science makes for progress, and so whatever we see around us that science gave birth to (like digital computers) will progress to endpoints that we can only dream about today.

The link between science and technology here is important. Technoscience, as it’s usually called, is the observation that science and technology are in symbiosis—and they are. Science (actually mathematical logic, actually Alan Turing, as his Turing Machine model was somewhat sui generis) gave us a theory of computable functions, or computation, for short. We just add a scientific, mathematical, and engineering virtuoso like Hungarian-born John von Neumann (1903–1957) and we have a working machine, implementing an abstract theory—computation. Ergo, a digital computer. Once we have this piece of technology, we can use it to test and evaluate other scientific theories. Von Neumann, ever resourceful, used computational devices to calculate blast ratios for fission reactions. (This, unfortunately, led to a new technology: the nuclear bomb.)

The problem with technoscience is that scientific discovery is typically more difficult, chancy, and unpredictable than making incremental improvements on the extant technologies that resulted from it. The psychology of the mismatch means that even good scientists will get bored or overconfident waiting around for discoveries, when they could just transfer all that Promethean zeal to improving the tech. As John Horgan, author of The End of Science, wryly noted back in the 1990s, the trajectory of a maturing technoscience seems to be away from core discoveries, like relativity or quantum mechanics, and toward more and more engineering projects. This kicks Einstein out in favor of technicians (or computer programmers). Americans, in particular, are inveterate builders and tinkerers, so it is perhaps inevitable that modern scientific culture refuses to wait around for discoveries.

The technoscience attitude is smart provided nothing stands in its way, like the need for a new discovery. AI could show up, predictably, on the heels of systematic improvements in algorithms and hardware. Maybe. Or maybe it’s another perpetual motion machine. The programmers can’t really say. Mathematician David Hilbert (1862–1943) challenged mathematicians at the turn of the twentieth century with a list of unsolved problems. All could be solved, he thought, if only everyone kept working. It never occurred to Hilbert that some problems might lack solutions and that further work by brilliant mathematicians would “solve” some problems on his list by proving that solutions were in fact impossible. This seems never to occur to certain scientific personalities in the grip of, if not full-blown scientism, at least a simplistic faith in the onward-and-upward, whose starting point is whatever is at hand. Cars will simply get faster until they fly, if no one discovers aeronautics and builds airplanes.

The mathematical logician Kurt Gödel (1906–1978) took up Hilbert's challenge. But he blew a giant hole in it. His incompleteness theorems, published in 1931 (David Hilbert was still around), proved that Hilbert's program of showing all of arithmetic to be both consistent and complete was impossible. A few years later, Alonzo Church and Alan Turing settled Hilbert's related Entscheidungsproblem, the "decision problem," in the negative as well.

Alan Turing later proved that Gödel-style incompleteness bedeviled the new field of computation, too. All sorts of hopes and dreams were dashed: universal bug checkers, for instance, are impossible, and there can be no general decision procedure for what's known as first-order logic. No one lost much sleep over the negative result (except, famously, Wittgenstein), because it wasn't really negative. Gödel's and later Turing's (and others') work on the limits of formal systems helped sharpen and clarify our understanding of such systems. Someone like Kurzweil or Musk might have trumpeted Hilbert's claim as they do current ideas about AI, on the simple ground that science solves problems. Well, yes, it does. It solved, for instance, the problem of determining the scope and limits of computation.
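Turing's "universal bug checker" result is the halting problem, and his diagonal argument is short enough to sketch in code. The names below (`halts`, `contrary`) are illustrative, not from the article: we suppose a halting oracle exists and derive a contradiction.

```python
# Sketch of Turing's diagonal argument: why a universal "bug checker"
# (a halting oracle) cannot exist. `halts` and `contrary` are
# hypothetical names used only for this illustration.

def halts(func, arg):
    """Hypothetical oracle: returns True iff func(arg) eventually halts."""
    raise NotImplementedError("Turing proved no such total, correct function exists")

def contrary(func):
    """Do the opposite of whatever the oracle predicts about func(func)."""
    if halts(func, func):
        while True:        # loop forever if the oracle says "halts"
            pass
    return "halted"        # halt if the oracle says "loops"

# Now ask: does contrary(contrary) halt?
# - If halts(contrary, contrary) returned True, contrary(contrary) would loop.
# - If it returned False, contrary(contrary) would halt.
# Either answer is wrong, so no correct `halts` can be implemented.
```

Whatever body one writes for `halts`, feeding `contrary` to itself forces the oracle to be wrong about at least one program, which is the contradiction Turing exploited.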

The more we think about the unfair exclusion of "negative" results in the annals of science, the sillier and more myopic Kurzweil's logic becomes. If science someday proved that computer systems could never reproduce some aspect of mind, we'd have learned something important about the nature of mind. It might spur new ideas and theories in, say, neuroscience, cognitive psychology, or cognate fields. "Ahh, so it's not reducible to a computer, ever," might be the beginning of new science in a new direction.

Werner Heisenberg, a physicist, used the equations of the new quantum physics to prove that quantum phenomena have built-in limitations. His uncertainty principle is limiting: we cannot measure simultaneously, to arbitrary precision, the position and momentum of a subatomic particle, and not merely because observing it throws it off course, but as a matter of principle. How dare he! But the knowledge is a cornerstone of quantum theory. The science progressed.
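In symbols (the standard textbook form, not given in the article itself), the limit Heisenberg discovered says the product of the uncertainties in position and momentum has a floor:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

No improvement in instruments can push the product below that bound; the limit belongs to the theory, not to the equipment.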

Nuclear fusion is also a good case study. We have all the necessary knowledge, more or less, to understand how to build a fusion reactor. The problem is that we need extreme temperatures. Okay, then, cold fusion. The problem is that cold fusion, as a loophole, seems increasingly chimerical. There's not a lot of money in cold fusion research anymore, but no one seems to be yelling at the simple-minded "cold fusion skeptics," as they might yell at someone beginning to glimpse deeper problems with equating minds and machines. If this idea, AI (or AGI), turns out to be like incompleteness or uncertainty, a discovered limit, or like cold fusion, a dream of cheap energy pumped around the planet that never materialized, the Hilberts of today will be disappointed, no doubt. But the true scientists will do what they always do: shrug, and feel grateful that something was learned. Science marches on.


Also by Analysis: The mind can't be just a computer: Gödel demonstrated that fact and Turing tried to live with it

Further reading: Things exist that are unknowable (Robert J. Marks)

and

Human intelligence as a halting oracle (Eric Holloway)

