A new book suggests that the real danger of artificial intelligence is that it will remain dumber than we are.
AI’s basic problem concerns how computers process symbols, from the series of English letters one types on a keyboard to, more fundamentally, the strings of 0’s and 1’s into which those letters are encoded. The meanings of these symbols (indeed, even the fact that they are symbols) are not something the computer knows. A computer no more understands what it processes than a slide rule comprehends the numbers and lines written on its surface. It’s the user of a slide rule who does the calculations, not the instrument itself. Similarly, it’s the designers and users of a computer who understand the symbols it processes. The intelligence is in them, not in the machine.
As Smith observes, a computer can be programmed to detect instances of the word “betrayal” in scanned texts, but it lacks the concept of betrayal. Therefore, if a computer scans a story about betrayal that happens not to use the actual word “betrayal,” it will fail to detect the story’s theme. And if it scans text that does contain the word, but without deploying the concept of betrayal, the computer will erroneously classify it as a story about betrayal. Due to the rough correlation that exists between contexts in which the word “betrayal” appears, and contexts in which the concept is deployed, the computer will loosely simulate the behavior of someone who understands the word—but, says Smith, to suppose such a simulation amounts to real intelligence is like supposing that climbing a tree amounts to flying.

Edward Feser, “Computer Pseudoscience” at City Journal
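The failure mode Smith describes can be made concrete with a minimal sketch. Here, bare substring matching stands in for the kind of word-level detector he has in mind; the function name and the two example texts are hypothetical illustrations, not Smith’s own examples.

```python
# Minimal sketch of the keyword-matching "detector" Smith describes.
# The classifier and both example texts are hypothetical illustrations.

def mentions_betrayal(text: str) -> bool:
    """Classify a text as 'about betrayal' by bare keyword matching."""
    return "betrayal" in text.lower()

# False negative: a story about betrayal that never uses the word.
story = "Brutus smiled at Caesar, then drove the dagger home."

# False positive: the word appears, but the concept is not in play.
review = "The film 'Betrayal' is a lighthearted comedy about baking."

print(mentions_betrayal(story))   # False -- the theme is missed
print(mentions_betrayal(review))  # True -- the text is misclassified
```

Both errors flow from the same gap: the program tracks a string of characters, not the concept the string expresses, which is exactly the distinction Smith is pressing.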
It is sobering to reflect on the possibility that public policy could be shaped by persons who are unfamiliar with such critical distinctions.
Dr. Feser is the author of Aristotle’s Revenge: The Metaphysical Foundations of Physical and Biological Science (2019).
And more from Gary Smith:
Gary Smith explains why computers’ stupidity makes them dangerous. To take one example, computer algorithms failed Hillary Clinton in the 2016 election because the things they could not measure proved to be decisive factors. They can be misleading in medical research too because they don’t address the all-too-common Texas Sharpshooter Fallacies.