As we saw yesterday, artificial intelligence (AI) has enjoyed a string of unbroken successes against humans. But these are successes in games where the map is the territory, so everything is computable.
That fact hints at the problem tech philosopher and futurist George Gilder raises in Gaming AI (free download here). Whether all human activities can be treated that way successfully is an entirely different question. As Gilder puts it, “AI is a system built on the foundations of computer logic, and when Silicon Valley’s AI theorists push the logic of their case to a ‘singularity,’ they defy the most crucial findings of twentieth-century mathematics and computer science.”
Here is one of the crucial findings they defy (or ignore): Philosopher Charles Sanders Peirce (1839–1914) pointed out that, generally, mental activity comes in threes, not twos (so he called it triadic). For example, you see a row of eggs in a carton and think “12.” You connect the objects (eggs) with a symbol, 12.
In Peirce’s terms, you are the interpretant, the one for whom the symbol 12 means something. But eggs are not 12. 12 is not eggs. Your interpretation is the third factor that makes 12 mean something with respect to the eggs.
Gilder reminds us that, in such a case, “the map is not the territory” (p. 37). Just as 12 is not the eggs, a map of California is not California. To mean anything at all, the map must be read by an interpreter. AI supremacy assumes that the machine’s map can somehow be big enough to stand in for the reality of California and eliminate the need for an interpreter.
The problem, he says, is that the map is not and never can be reality. There is always a gap:
Denying the interpretant does not remove the gap. It remains intractably present. If the inexorable uncertainty, complexity, and information overflows of the gap are not consciously recognized and transcended, the gap fills up with noise. Congesting the gap are surreptitious assumptions, ideology, bias, manipulation, and static. AI triumphalism allows it to sink into a chaos of constantly changing but insidiously tacit interpretations.
Ultimately AI assumes a single interpretant created by machine learning as it processes ever more zettabytes of data and converges on a single interpretation. This interpretation is always of a rearview mirror. Artificial intelligence is based on an unfathomably complex and voluminous look at the past. But this look is always a compound of slightly wrong measurements, thus multiplying its errors through the cosmos. In the real world, by contrast, where interpretation is decentralized among many individual minds—each person interpreting each symbol—mistakes are limited, subject to ongoing checks and balances, rather than being inexorably perpetuated onward.
George Gilder, Gaming AI (p. 38)
Does this limitation make a difference in practice? It helps account for the ongoing failure of Big Data to provide consistently meaningful correlations in science, medicine, or economics research. Economics professor Gary Smith puts the problem this way:
Humans naturally assume that all patterns are significant. But AI cannot grasp the meaning of any pattern, significant or not. Thus, from massive number crunches, we may “learn” (if that’s the right word) that
➤ Stock prices can be predicted from Google searches for the word debt.
➤ Stock prices can be predicted from the number of Twitter tweets that use “calm” words.
➤ An unborn baby’s sex can be predicted by the amount of breakfast cereal the mother eats.
➤ Bitcoin prices can be predicted from stock returns in the paperboard-containers-and-boxes industry.
➤ Interest rates can be predicted from Trump tweets containing the words billion and great.
News, “Interview: New book outlines the perils of big (meaningless) data” at Mind Matters News
If the significance of those patterns makes no sense to you, it’s not because you are not as smart as the Big Data machine. Those patterns shouldn’t make any sense to you. There’s no sense in them because they are meaningless.
… even random data contain patterns. Thus the patterns that AI algorithms discover may well be meaningless. Our seduction by patterns underlies the publication of nonsense in good peer-reviewed journals.
News, “Interview: New book outlines the perils of big (meaningless) data” at Mind Matters News
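Smith’s claim that even random data contain patterns is easy to demonstrate. The sketch below (my illustration, not from Smith or Gilder; all names and numbers are made up) generates a purely random “stock” series and a thousand purely random candidate “signals,” then reports the strongest correlation found. Search enough noise and an impressive-looking pattern always turns up:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


days = 50
# Purely random "daily returns" -- there is nothing real to predict.
stock = [random.gauss(0, 1) for _ in range(days)]
# 1,000 equally random candidate "predictors" (tweets, searches, cereal...).
signals = [[random.gauss(0, 1) for _ in range(days)] for _ in range(1000)]

# Data-mine for the best-looking predictor, exactly as a naive
# big-data trawl would.
best = max(abs(pearson(s, stock)) for s in signals)
print(f"best |correlation| among 1000 random signals: {best:.2f}")
```

With 50 observations and 1,000 candidates, the winning correlation is typically strong enough to look publishable, even though every series is noise by construction. That is the "seduction by patterns" in miniature.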
Yes, such meaningless findings from Big Data do creep into science and medicine journals. That’s partly a function of thinking that a big computer can do our thinking for us even though it can’t recognize the meaning of patterns. It’s what happens when there is no interpreter.
Ah, but—so we are told—quantum computers will evolve so as to save the dream of true thinking machines. Gilder has thought about that one too. In fact, he’s been thinking about it since 1989 when he published Microcosm: The Quantum Revolution in Economics and Technology.
It’s true that, in the unimaginably tiny quantum world, electrons can do things we can’t:
A long-ago thought experiment of Einstein’s showed that once any two photons—or other quantum entities—interact, they remain in each other’s influence no matter how far they travel across the universe (as long as they do not interact with something else). Schrödinger christened this “entanglement”: The spin—or other quantum attribute—of one behaves as if it reacts to what happens to the other, even when the two are impossibly remote.
George Gilder, Gaming AI (p. 40)
But, he says, it’s also true that continuously observing a quantum system will immobilize it (the quantum Zeno effect). As John Wheeler reminded us, we live in a “participatory universe” where the observer (Peirce’s interpretant) is critical. So quantum computers, however cool they sound, still play by rules where the interpreter matters.
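The quantum Zeno effect Gilder invokes can be stated in one standard textbook formula (my gloss, not from the book). A two-level system that would drift out of its initial state at rate $\omega$, but is measured $n$ times during a time $t$, survives in its initial state with probability

```latex
P_{\text{survive}}(n) \;=\; \left[\cos^2\!\left(\frac{\omega t}{n}\right)\right]^{n}
\;\approx\; \left(1 - \frac{\omega^2 t^2}{n^2}\right)^{n}
\;\xrightarrow{\;n \to \infty\;}\; 1 .
```

As the measurements become continuous ($n \to \infty$), the survival probability goes to one: the watched system never evolves. The observer is not incidental to the outcome; it determines it.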
In any event, at the quantum scale, we are trying to measure “atoms and electrons using instruments composed of atoms and electrons” (p. 41). That is self-referential and introduces uncertainty into everything: “With quantum computing, you still face the problem of creating an analog machine that does not accumulate errors as it processes its data” (p. 42). Now we are back where we started: Making the picture within the machine much bigger and more detailed will not make it identical to the reality it is supposed to interpret correctly.
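Gilder’s point about a machine “that does not accumulate errors as it processes its data” has a familiar classical analogue (my illustration, not his): even exact digital hardware accumulates error when the quantity it starts from cannot be represented exactly. The decimal 0.1 has no finite binary floating-point representation, so repeatedly processing it drifts away from the true answer:

```python
# Toy illustration: 0.1 is stored with a tiny representation error,
# and each addition can add a tiny rounding error. A million steps
# later, the accumulated drift is plainly visible.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

error = abs(total - 100_000.0)  # exact answer would be 100,000
print(f"accumulated error after one million additions: {error:.3g}")
```

The drift is small here, but it is never zero, and it compounds with the length of the computation. That is the shape of the problem Gilder says quantum (and analog) machines face, scaled down to a single slightly wrong number.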
And remember, we still have no idea how to make the Ultimate Smart Machine “conscious” because we don’t know what consciousness is. We do know one thing for sure now: If Peirce is right, we could turn most of the known universe into processors and still not produce an interpreter (the consciousness that understands meaning).
Robert J. Marks points out that human creativity is “non-algorithmic” and therefore uncomputable. From which Gilder concludes, “The test of the new global ganglia of computers and cables, worldwide webs of glass and light and air, is how readily they take advantage of unexpected contributions from free human minds in all their creativity and diversity. These high-entropy phenomena cannot even be readily measured by the metrics of computer science” (p. 46).
It’s not clear to Gilder that the AI geniuses of Silicon Valley are taking this in. The next Big Fix is always just around the corner and the Big Hype is always at hand.
Meanwhile, the rest of us can ponder an idea from technology philosopher George Dyson, “Complex networks—of molecules, people or ideas—constitute their own simplest behavioral descriptions.” (p. 53) He was explaining why analog quantum computers would work better than digital ones. But, considered carefully, his idea also means that you are ultimately the best definition of you. And that’s not something that a Big Fix can just get around.
Here’s the earlier article: Why AI geniuses think they can create true thinking machines. Early on, it seemed like a string of unbroken successes … In Gaming AI, George Gilder recounts the dizzying achievements that stoked the ambition—and the hidden fatal flaw.