Computer scientist Jeffrey Shallit takes issue with my parable (September 8, 2018) about “machine learning.” The tale features a book whose binding cracks at certain points from the repeated use of certain pages. The damage makes those oft-consulted pages easier for the next user to find. My question was, can the book be said to have “learned” the users’ most frequent needs?
I used the story about the book to argue that “machine learning” is an oxymoron. Like the book, machines can “learn” only metaphorically, not in reality. Machines don’t have minds. Machines can change with repeated use, by design or by happenstance (as in the case of the book). They can become more effective tools because of such changes. But machines don’t learn, because learning—which is the acquisition of new knowledge—is something unique to creatures with minds, like human beings.
Shallit, however, argues that a computer is not just a machine, but something quite special:
To be genuinely considered a “computer”, a machine should be able to carry out basic operations such as comparisons and conditional branching. And some would say that a computer isn’t a real computer until it can simulate a Turing machine. A book with a cracked binding isn’t even close.
Of course, my parable was an analogy between a book and a computer. That was, in fact, my point. Even the most rudimentary device—a device far less complex than a computer—can be said to “learn” metaphorically through repeated use. That does not mean that it really learns, but only that it changes in a way that reminds us of learning.
The same metaphorical learning—not genuine learning—that a book with a cracked binding undergoes is what happens when computers “learn.” With a computer, the process of change is harder to see because of the inherent complexity of computation, but the principle is the same. Machines of any sort can change over time in a way that makes them better tools. That does not mean that the machines “learn.” They change in ways that help us, and we (metaphorically) call that “learning” because they are more useful to us as a result.
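As an illustrative sketch (not from the original article), a classic “self-adjusting” data structure, the move-to-front list, shows this kind of purely mechanical change with use: items that are looked up often drift toward the front, so later lookups are faster, yet nothing in the process involves acquiring knowledge. The class and item names below are invented for the example.

```python
# A minimal sketch of mechanical "adaptation": a move-to-front list.
# Like the book whose binding cracks at well-thumbed pages, repeated use
# reshapes the structure so popular items are found faster. The list
# changes with use; it does not know anything.

class MoveToFrontList:
    def __init__(self, items):
        self.items = list(items)

    def find(self, target):
        """Linear search; on a hit, move the found item to the front."""
        for i, item in enumerate(self.items):
            if item == target:
                self.items.insert(0, self.items.pop(i))  # reshape with use
                return i  # comparisons needed this time (0-indexed position)
        raise ValueError(f"{target!r} not found")

pages = MoveToFrontList(["recipes", "maps", "tables", "index"])
first = pages.find("index")   # slow: scanned past three other entries
second = pages.find("index")  # fast: "index" now sits at the front
print(first, second)  # prints: 3 0
```

Whether one calls this “learning” is exactly the question at issue; the code itself is only a deterministic rearrangement triggered by use.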
Another analogy may help. When I was a kid, I had a great baseball glove. I bought it new, and for the first few weeks, I wore it all the time—to bed, around the house, etc. I also oiled it with special oil to make it more pliable. As I wore it and played baseball with it, it gradually conformed more and more closely to the shape of my hand and to the shape of the baseball. It became a great glove—a more effective tool because it changed as I used it.
In a metaphorical sense, one might say that my glove “learned” the shape of my hand and the shape of the baseball. But only metaphorically. The glove obviously didn’t learn anything; I learned to be a better baseball player by using it. People learn to play baseball, using tools that adapt. Baseball gloves don’t learn anything by adapting.
It’s the same with all machine learning. Baseball gloves are leather devices used by humans to play baseball. Computers are electromechanical devices used by humans to map inputs to outputs according to an algorithm. They are inanimate tools, and they don’t have minds of any sort. The way computers work can change with time, and thus we say that they “learn” metaphorically, just as a kid might say that his baseball glove “learns” to be a better glove with use.
But “machine learning” is just a metaphor, and we must be careful not to mistake our metaphors for metaphysics.
Also by Michael Egnor: Can machines really learn? A parable of a book that learned
The brain is not a “meat computer”: Dramatic recoveries from brain injury highlight the difference
Dr. Egnor is a neurosurgeon, professor of Neurological Surgery and Pediatrics, and Director of Pediatric Neurosurgery at Stony Brook School of Medicine.