
AI Expert: Artificial Intelligences Are NOT Electronic People

AI makes mistakes no human makes, so some experts are trying to adapt human cognitive psychology to machines

David Watson of the Oxford Internet Institute and the Alan Turing Institute has published an interesting and quite readable paper in Minds and Machines on the way in which artificial intelligence experts often endow their creations — mistakenly — with human characteristics. In his open access paper, “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” he fills us in on some of the limitations of AI and proposes fixes based on human thinking.

First, thinking that AI is like a human or about to become like a human is not new:

The biomimetic approach to AI has always inspired the popular imagination. Writing about Rosenblatt’s perceptron, the New York Times declared in 1958 that “The Navy has revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence” (New York Times 1958, p. 25). The exuberance has only been somewhat tempered by the intervening decades. The same newspaper recently published a piece on DeepMind’s AlphaZero, a DNN that is the reigning world champion of chess, shogi, and Go (Silver et al. 2018). In the essay, Steven Strogatz describes the algorithm in almost breathless language:

“Most unnerving was that AlphaZero seemed to express insight. It played like no computer ever has, intuitively and beautifully, with a romantic, attacking style. It played gambits and took risks…. AlphaZero had the finesse of a virtuoso and the power of a machine. It was humankind’s first glimpse of an awesome new kind of intelligence.” (Strogatz 2018)

Watson, D., “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” Minds and Machines 29, 417–440 (2019) (open access)

He identifies three limitations of AI as expressed in deep neural networks (DNNs) like AlphaZero, which he describes as “brittle, inefficient, and myopic,” limitations that often go unrecognized:

First, he says, AIs in the form of DNNs “tend to break down in the face of minor attacks”:

In a landmark paper, Goodfellow et al. (2014) introduced generative adversarial networks (GANs), a new class of DNNs designed to fool other DNNs through slight perturbations of the input features. For instance, by adding just a small amount of noise to the pixels of a photograph, Goodfellow et al. (2015) were able to trick the high-performing ImageNet classifier into mislabeling a panda as a gibbon, even though differences between the two images are imperceptible to the human eye (see Fig. 3). Others have fooled DNNs into misclassifying zebras as horses (Zhu et al. 2017), bananas as toasters (Brown et al. 2017), and many other absurd combinations.

Watson, D., “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” Minds and Machines 29, 417–440 (2019) (open access)
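To make the “slight perturbation” idea concrete, here is a minimal sketch of the fast gradient sign method described by Goodfellow et al. (2015), written in PyTorch. It is a generic illustration, not code from Watson’s paper; the model, label tensor, epsilon value, and function name are placeholders.

```python
# A minimal sketch of a "small perturbation" attack (fast gradient sign
# method). The classifier and labels are placeholders; any differentiable
# image model would do.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Return an adversarially perturbed copy of `image`.

    The perturbation is bounded by `epsilon`, so it is tiny per pixel,
    but it is chosen to push the model's loss uphill, which is often
    enough to flip the predicted class.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)                        # forward pass
    loss = F.cross_entropy(logits, true_label)   # loss w.r.t. the correct label
    loss.backward()                              # gradients w.r.t. the pixels
    # Nudge each pixel by +/- epsilon in the direction that increases the loss
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()        # keep a valid pixel range
```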

He adds, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.” It’s an open question whether we can give them this component if we are not even sure what it is.

Second, he notes that

Another important flaw with DNNs is that they are woefully data inefficient. High-performing models typically need millions of examples to learn distinctions that would strike a human as immediately obvious. Geoffrey Hinton, one of the pioneers of DNNs and a recent recipient of the ACM’s prestigious Turing Award for excellence in computing, has raised the issue himself in interviews. “For a child to learn to recognize a cow,” he remarked, “it’s not like their mother needs to say ‘cow’ 10,000 times” (Waldrop 2019). Indeed, even very young humans are typically capable of one-shot learning, generalizing from just a single instance. This is simply impossible for most DNNs, a limitation that is especially frustrating in cases where abundant, high-quality data are prohibitively expensive or difficult to collect. Gathering large volumes of labelled photographs is not especially challenging, but comparable datasets for genetics or particle physics are another matter altogether.

Watson, D., “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” Minds and Machines 29, 417–440 (2019) (open access)
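One-shot learning itself is easy to state. Here is a minimal sketch of one common approach, nearest-prototype matching in an embedding space; the random vectors stand in for the embeddings a pretrained network would supply, and the labels and dimensions are illustrative assumptions rather than anything from Watson’s paper.

```python
# A minimal sketch of one-shot classification by nearest prototype:
# given a single labelled example per class, assign a new item to the
# class whose example is closest in embedding space.
import numpy as np

def one_shot_classify(query, prototypes):
    """prototypes: dict mapping class label -> one embedding vector."""
    labels = list(prototypes)
    dists = [np.linalg.norm(query - prototypes[c]) for c in labels]
    return labels[int(np.argmin(dists))]   # nearest prototype wins

rng = np.random.default_rng(0)
prototypes = {"cow": rng.normal(size=64), "horse": rng.normal(size=64)}
query = prototypes["cow"] + rng.normal(scale=0.1, size=64)  # a noisy "cow"
print(one_shot_classify(query, prototypes))                 # -> "cow"
```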

Below is an example of the misidentification problem, courtesy of Auburn University:

Watson discusses various workarounds but offers, “Promising though these strands of research may be, one-shot learning remains a significant challenge for DNNs.” Could the problem have to do with the fact that AI systems don’t “understand” — or feel any need to understand — what they are seeing? If so, can they be given such understanding? How?

And third, deep learning neural networks are “strangely myopic”:

The problem is most evident in the case of image classification. Careful analysis of the intermediate layers of convolutional DNNs reveals that whereas the lowest level neurons deal in pixels, higher level neurons operate on more meaningful features like eyes and ears, just as Hubel and Wiesel hypothesized (Olah et al. 2018). Yet even top performing models can learn to discriminate between objects while completely failing to grasp their interrelationships. For instance, rearranging Kim Kardashian’s mouth and eye in Fig. 4 actually improved the DNN’s prediction, indicating something deeply wrong with the underlying model, which performs well on out-of-sample data (Bourdakos 2017).

Watson, D., “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” Minds and Machines 29, 417–440 (2019) (open access)

We are told that “the technology is still in its infancy” and various fixes are suggested. But fundamentally, humans know what a face should look like in a way that goes beyond machine learning and that may make a difference in the long run.

Watson goes on to discuss three general fixes that borrow from human thinking: lasso penalties, bagging (bootstrap aggregating), and boosting. Each is based on a facet of human psychology:

Lasso penalties: “The basic intuition behind the lasso is that datasets are often intolerably noisy. We need some sensible method for eliminating variables that hinder our ability to detect and exploit signals of interest. The lasso is not the only way to achieve this goal… To the best of my knowledge, no research in lasso penalties has been explicitly motivated by connections to the cognitive process of sensory gating. Yet the success of this statistical technique can be at least partly explained by the fact that it implements a strategy that is essential to human intelligence.” Essentially, when we need to make a decision, we filter out “noise” in favor of relevant factors.
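As a rough illustration of that filtering, here is a small scikit-learn sketch in which only three of twenty candidate features carry signal and the lasso’s L1 penalty drives most of the irrelevant coefficients to exactly zero. The synthetic data and the alpha value are assumptions for the example, not anything from Watson’s paper.

```python
# A minimal sketch of the lasso's noise-filtering behavior.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # 20 candidate features
true_coef = np.zeros(20)
true_coef[:3] = [3.0, -2.0, 1.5]         # only the first 3 carry signal
y = X @ true_coef + rng.normal(scale=0.5, size=200)

model = Lasso(alpha=0.1).fit(X, y)       # larger alpha prunes more aggressively
kept = np.flatnonzero(model.coef_)       # features that survived the penalty
print("features retained:", kept)        # typically just the informative ones
```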

Bagging: “The success and broad applicability of bagging should come as no surprise to anyone familiar with the so-called “wisdom of crowds”… Condorcet’s jury theorem (1785) states that any verdict reached by a set of independent and better than random jurors is more likely to be correct than the judgment of any individual juror. Moreover, the probability of a correct majority judgment approaches 1 as the jury size increases. Galton famously reported in 1907 that observers at a county fair accurately guessed the weight of an ox—not individually, but in aggregate, when their estimates were averaged (Galton 1907). Faith in humanity’s collective wisdom arguably undergirds all free markets, where information from a variety of sources is efficiently combined to determine the fair price of assets (Fama 1965). Crowd sourcing has recently become popular in the natural sciences, where online enthusiasts have helped map the neural circuitry of the mammalian retina (Kim et al. 2014) and discover new astronomical objects (Cardamone et al. 2009; Watson and Floridi 2018).” Of course, one must distinguish wise from irrational crowds, and Watson cites The Wisdom of Crowds (2004) by James Surowiecki for the distinguishing factors: (1) diversity of opinion; (2) independence; (3) decentralization; (4) aggregation; and (5) trust. In any event, the feature that AI programmers seek is part of everyday human life.
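A minimal sketch of bagging, again using scikit-learn: many high-variance decision trees, each fit to a bootstrap resample of the data, vote on the final answer, a statistical “wisdom of crowds.” The synthetic dataset and ensemble size are illustrative assumptions.

```python
# Bagging (bootstrap aggregating): 100 tree "jurors" usually beat one tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
bagged_trees = BaggingClassifier(
    DecisionTreeClassifier(),   # each juror is an unpruned, high-variance tree
    n_estimators=100,           # 100 bootstrap resamples, one tree per resample
    random_state=0,
)

print("single tree :", cross_val_score(single_tree, X, y).mean())
print("bagged trees:", cross_val_score(bagged_trees, X, y).mean())
```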

Boosting: This process, again, is similar to a process cognitive scientists call predictive coding: “According to this theory, human perception is a dynamic inference problem in which the brain is constantly attempting to classify the objects of phenomenal experience and updating predictions based on new sensory information… Predictive coding has also been conceptualized as a sort of backpropagation algorithm (Whittington and Bogacz 2019), in reference to the method by which neural network parameters are trained. In both routines, forward passes carry predictions and backward passes carry errors. Through iterative refinement, the system — biological or synthetic — attempts to converge on a set of maximally accurate predictions.”
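Boosting’s iterative refinement on errors can be sketched directly. The toy example below assumes squared-error loss: each new shallow tree is fit to the residuals, that is, the errors, of the ensemble built so far. The data, learning rate, and number of rounds are placeholders, not anything from Watson’s paper.

```python
# Boosting as iterative error correction: predictions go forward,
# errors (residuals) come back and shape the next weak learner.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=300)

learning_rate = 0.1
prediction = np.zeros_like(y)        # start from a trivial prediction
stumps = []

for _ in range(100):
    residual = y - prediction                           # current errors
    stump = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    prediction += learning_rate * stump.predict(X)      # refine the prediction
    stumps.append(stump)

print("mean squared error:", np.mean((y - prediction) ** 2))
```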

Can machines be taught to think like humans, using these procedures, in the absence of consciousness? We shall see. Meanwhile, Watson cautions against taking any of these “metaphors and analogies” too literally, especially where algorithms that determine creditworthiness or the likelihood of committing a crime are concerned:

Algorithms can only exercise their (artificial) agency as a result of a socially constructed context in which we have deliberately outsourced some task to the machine. This may be more or less reasonable in different situations. Software for filtering spam emails is probably unobjectionable; automated systems for criminal sentencing, on the other hand, raise legitimate concerns about the nature and meaning of justice in an information society. In any event, the central point — one as obvious as it is frequently overlooked — is that it is always humans who choose whether or not to abdicate this authority, to empower some piece of technology to intervene on our behalf… The anthropomorphic impulse, so pervasive in the discourse on AI, is decidedly unhelpful in this regard.

Watson, D., “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence,” Minds and Machines 29, 417–440 (2019) (open access)

That raises a question: to what extent is the temptation to outsource ethical problems to a machine driven by a desire to avoid addressing them ourselves, citing AI as an authority instead? Whatever people may believe about whether AI can think like people, it is not likely to be regarded as a very convincing authority.

Not when everyone knows that what’s behind AI is people all the way down.

You may also enjoy:

AI will fail, like everything else, eventually. The more powerful the AI, the more serious the consequences of failure. Overall, we predict that AI failures and premeditated malevolent AI incidents will increase in frequency and severity proportionate to AIs’ capability.

and

AI is no match for ambiguity. Many simple sentences confuse AI but not humans


