
Why machines can’t think as we do

As philosopher Michael Polanyi noted, much that we know is hard to codify or automate

Michael Polanyi (1891-1976)

Recently, we looked at Moravec’s Paradox: it is hard to teach machines to do things that are easy for most humans (walking, for example) but comparatively easy to teach them to do things that are challenging for most humans (chess comes to mind).

Another paradox worth noting is Polanyi’s Paradox, named in honor of philosopher Michael Polanyi (1891-1976), who developed the concept of “tacit knowledge”:

Central to Michael Polanyi’s thinking was the belief that creative acts (especially acts of discovery) are shot-through or charged with strong personal feelings and commitments (hence the title of his most famous work Personal Knowledge). Arguing against the then dominant position that science was somehow value-free, Michael Polanyi sought to bring into creative tension a concern with reasoned and critical interrogation with other, more ‘tacit’, forms of knowing.

Polanyi’s argument was that the informed guesses, hunches and imaginings that are part of exploratory acts are motivated by what he describes as ‘passions’. They might well be aimed at discovering ‘truth’, but they are not necessarily in a form that can be stated in propositional or formal terms. As Michael Polanyi (1967: 4) wrote in The Tacit Dimension, we should start from the fact that ‘we can know more than we can tell’.

Mark K. Smith, “Michael Polanyi and tacit knowledge” at Infed

Here’s the Paradox, as formulated by law professor John Danaher, who studies emerging technologies, at his blog Philosophical Disquisitions:

We can know more than we can tell, i.e. many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate.

We have all encountered that problem. It’s common in healthcare and personal counseling. Some knowledge simply cannot be conveyed—or understood or accepted—in a propositional form. For example, a nurse counselor may see clearly that her elderly post-operative patient would fare better in a retirement home than in his run-down private home with several staircases.

The analysis, as such, is straightforward. But that is not the challenge the nurse faces. Her challenge is to convey to the patient, not the information itself, but her tacit knowledge that the proposed move would liberate, rather than restrict, him. She may face powerful cultural and psychological barriers in communicating that knowledge to him if he perceives the move as a loss of independence, pure and simple.

Artificial intelligence may help, of course. He may see virtual tours of lifestyle options as less threatening than actual tours. But, in the end, he must gain tacit knowledge of the fact that in a barrier-free environment, he will be freer to do as he wishes. And he must gain it by some method other than a simple transfer of information from the nurse counselor to himself. Human life is full of these challenges.

David H. Autor

For that reason, MIT economist David Autor doubts that machines can simply replace humans in most jobs:

Why are these middle-skill jobs likely to persist and, potentially, to grow? My conjecture is that many of the tasks currently bundled into these jobs cannot readily be unbundled—with machines performing the middle-skill tasks and workers performing the residual—without a substantial drop in quality. Consider, for example, the commonplace frustration of calling a software firm for technical support only to discover that the support technician knows nothing more than what is on his or her computer screen—that is, the technician is a mouthpiece, not a problem-solver. This example captures one feasible division of labor: machines performing routine technical tasks, such as looking up known issues in a support database, and workers performing the manual task of making polite conversation while reading aloud from a script. But this is not generally a productive form of work organization because it fails to harness the complementarities between technical and interpersonal skills. Stated in positive terms, routine and nonroutine tasks will generally coexist within an occupation to the degree that they are complements—that is, the quality of the service improves when the worker combines technical expertise and human flexibility.

David H. Autor, “Polanyi’s Paradox and the Shape of Employment Growth” at National Bureau of Economic Research (Working Paper No. 20485)

Of course, one outcome is that people skills will become proportionately more important in tomorrow’s workforce, not less. The nurse who can communicate tacit knowledge so as to help a patient through a difficult transition is not likely to be automated any time soon.

Hat tip: Richa Bhatia at Analytics India

See also: Why can’t machines learn simple tasks?: They can learn to play chess more easily than to walk. If explicitly human intelligence is related to the hard problem of consciousness, the robotics engineers might best leave consciousness out of their goals for their products and focus on more tangible ones.

