Image: Border collie dog catching a frisbee in a jump

Researchers Disappointed By Efforts to Teach AI Common Sense

When it comes to common sense, can the researchers really dispense with the importance of life experience?

A recent experiment demonstrated that AI still lacks common sense:

“Current machine text-generation models can write an article that may be convincing to many humans, but they’re basically mimicking what they have seen in the training phase,” said [PhD student Yuchen] Lin. “Our goal in this paper is to study the problem of whether current state-of-the-art text-generation models can write sentences to describe natural scenarios in our everyday lives.”

University of Southern California, “New test reveals AI still lacks common sense” at ScienceDaily. The paper is open access.

Essentially, fake news bots can sound like the New York Times or like marketing copy by mimicking the thousands of natural examples they have taken in. But they aren’t thinking about any of it. Does that matter?

Specifically, Ren and Lin tested the models’ ability to reason and showed there is a large gap between current text generation models and human performance. Given a set of common nouns and verbs, state-of-the-art NLP computer models were tasked with creating believable sentences describing an everyday scenario. While the models generated grammatically correct sentences, they were often logically incoherent. For instance, here’s one example sentence generated by a state-of-the-art model using the words “dog, frisbee, throw, catch”:

“Two dogs are throwing frisbees at each other.”

The test is based on the assumption that coherent ideas (in this case: “a person throws a frisbee and a dog catches it,”) can’t be generated without a deeper awareness of common-sense concepts.

University of Southern California, “New test reveals AI still lacks common sense” at ScienceDaily. The paper is open access.
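For readers who want to see what “generating from a concept set” looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library and the generic t5-base checkpoint. The prompt format and checkpoint are illustrative assumptions on my part, not the researchers’ actual fine-tuned setup; without task-specific training, the output is likely to show exactly the kind of incoherence described above.

```python
# A minimal sketch of concept-to-sentence generation, assuming the
# Hugging Face "transformers" library and the generic "t5-base" checkpoint.
# The prompt below is a made-up illustration; the systems the paper tests
# are trained on concept-to-sentence pairs rather than simply prompted.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

concepts = ["dog", "frisbee", "throw", "catch"]
prompt = "generate a sentence using these words: " + ", ".join(concepts)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(inputs.input_ids, max_length=32, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```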

A human does not need to be told that two dogs are not throwing frisbees at each other. AI does not know that unless it somehow gets programmed in.

Jonathan Bartlett explains the difficulty from a programmer’s perspective:

What makes software writing difficult is that we are used to making constant decisions based on how we think the world is expecting us to work. If I ask someone to wash the dishes, I don’t have to get incredibly detailed on how the process works. They can look at a dish, and think about what needs to happen, and do the right thing. I don’t have to explain exactly what I mean by clean or dirty, I don’t have to explain the difference between the plate and something that is dried onto the plate. I don’t have to explain that they shouldn’t break the plates. The person doing the dishes understands all of these things implicitly and can carry them out effectively without a lot of instruction.

Computers, however, are different. Computers have no intuitive notions. They will do what you ask them to, and they will continue to do so whether it “makes sense” or not, because everything makes sense to a computer. When automating a task, very subtle decisions have to be made at every point. What makes a programmer valuable is not their knowledge of any particular language or arcane interface. What makes a programmer valuable is their ability to recognize the extremely subtle impacts of all of their decisions along the way.

Jonathan Bartlett, “The Myth of ‘No Code’ Software (Part I)” at Mind Matters News

Lin and his research colleagues found a much lower rate of common sense than is often claimed:

To evaluate different machine models, the pair developed a constrained text generation task called CommonGen, which can be used as a benchmark to test the generative common sense of machines. The researchers presented a dataset consisting of 35,141 concepts associated with 77,449 sentences. They found that even the best-performing model only achieved an accuracy rate of 31.6% versus 63.5% for humans.

“We were surprised that the models cannot recall the simple commonsense knowledge that ‘a human throwing a frisbee’ should be much more reasonable than a dog doing it,” said Lin. “We find even the strongest model, called the T5, after training with a large dataset, can still make silly mistakes.”

University of Southern California, “New test reveals AI still lacks common sense” at ScienceDaily. The paper is open access.
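To make the 31.6 percent figure concrete, the sketch below shows the simplest kind of check a benchmark like CommonGen could automate: whether a generated sentence mentions every required concept at all. The crude word-stem matching here is my own simplification for illustration, not the paper’s actual metric; the real evaluation also compares outputs against human-written reference sentences, which is what catches grammatical but implausible output like the frisbee-throwing dogs.

```python
# Illustrative sketch (not the paper's metric): check whether a generated
# sentence uses every required concept, via naive word-stem matching.
def covers_concepts(sentence, concepts):
    words = sentence.lower().split()
    return all(any(w.startswith(c.lower()) for w in words) for c in concepts)

concepts = ["dog", "frisbee", "throw", "catch"]

# A sensible human sentence covers all four concepts.
print(covers_concepts("A dog catches the frisbee its owner throws.", concepts))   # True

# The model's sentence drops "catch" entirely, so even this crude check flags it...
print(covers_concepts("Two dogs are throwing frisbees at each other.", concepts))  # False

# ...but coverage alone cannot detect nonsense: this sentence uses every concept
# yet no human would write it, which is why the benchmark also scores outputs
# against human references.
print(covers_concepts("A frisbee throws and catches a dog.", concepts))            # True
```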

One hitch is that, in humans, common sense comes from life experience, not from searching a large amount of text. Much can be extrapolated from just experiencing dogs, getting to know what a dog is. Reading about dogs may be helpful but is not necessary.

However, life experience is precisely what AI does not have. So the question is, when it comes to common sense, can the researchers really dispense with the importance of life experience?

Lin remains hopeful: “By introducing common sense and other domain-specific knowledge to machines, I believe that one day we can see AI agents such as Samantha in the movie Her that generate natural responses and interact with our lives.” Very well, but then how do we introduce common sense? We are back where we started.

Note: Ambiguous questions that baffle AI because it cannot access life experience are called Winograd schemas. Robert J. Marks offers a number of examples, with commentary, here.


You may also enjoy: AI is no match for ambiguity. Many simple sentences confuse AI but not humans (Robert J. Marks)

