
“Sentiment Analysis”? Why Not Just ASK People What They Think?

My computer science professor always told me, “Never solve a problem you can eliminate.”

AI researchers are trying to develop algorithms that pick up on cues within a written text that reveal the writer’s emotional state (sentiment analysis). Recently, Mind Matters News reported on a new algorithm for processing sarcasm in social media posts — a good example of trying to infer sentiment from text.

My computer science professor always told me, “Never solve a problem you can eliminate.” It seems to me that a lot of machine sentiment analysis can be bypassed by simply asking users to report their feelings when they write their posts.

That may seem obvious but, these days, obvious answers are in short supply. Many people insist on finding the most complicated way to solve problems.

Asking a user for their sentiment creates a solution that is 100% effective, can be done in an hour by a single junior developer, and requires no maintenance. Sentiment analysis, on the other hand, requires a continually tuned algorithm which must be planned, analyzed, and maintained by high-dollar data scientists — or outsourced to a third party.
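To make the comparison concrete, here is a minimal sketch of the “just ask” approach. The type and field names here are my own invention for illustration, not any particular product’s API; the point is only that when the user selects a label, there is nothing to infer:

```typescript
// A minimal sketch of user-reported sentiment (hypothetical names).
// The user selects a label; nothing has to be inferred from the text.

type Sentiment = "happy" | "neutral" | "frustrated" | "really mad";

interface Feedback {
  sentiment: Sentiment; // chosen by the user from a dropdown or checkbox
  message: string;      // the specific problem, reported in a separate box
}

// Routing can key directly off the user's own label:
// no model to train, tune, or pay data scientists to maintain.
function routeFeedback(fb: Feedback): string {
  return fb.sentiment === "really mad" || fb.sentiment === "frustrated"
    ? "priority-queue"
    : "standard-queue";
}

const report: Feedback = {
  sentiment: "really mad",
  message: "When I click the Save button, I get error E42.",
};

console.log(routeFeedback(report)); // "priority-queue"
```

That is the entire “algorithm”: a dropdown, a text box, and an if-statement, which is roughly what an hour of a junior developer’s time buys.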

Additionally, asking a user to describe their current state of mind has other well-known benefits.


First, it provides an emotional outlet. A user who is angry can simply check, “I’m really mad!” Many companies have found that, when a separate space is offered for communicating sentiment, the actual information communicated in the main text box is much more productive and informative.

For example, when there is no chance to sound off about feelings (sentiment), the user will say something like, “I hate your software! It’s always buggy and breaks when I least expect it!!!!!”

This is absolutely useless information for the recipient because it is not specific. But the sender feels the need to talk that way in case the recipient doesn’t properly understand how distressed they feel. Give them a place to indicate their sentiment and they are much more likely to 1) click “I’m really hopping mad!” and then 2) say, in the main text, “when I click on X button, I get Y error.”

Second, when user sentiment is communicated separately from an explanation of the specific problem, customer service representatives can more effectively address both the emotional sentiment and the problems the user is experiencing with the product.

Having a computer try to guess at the user’s sentiment actually causes more problems. The customer service representative might rely on the computer’s guesses rather than personal intuition, misunderstand the situation, and then blame the mistake on the machine.

A user would then be quite befuddled by a representative who thinks the user is angry (or happy or sad) when, so far as the user knows, the message was neither intended to convey such a sentiment nor actually conveyed it (whatever the algorithm supposedly shows). By contrast, users who are given the chance to mark a sentiment themselves will likely find that an alert customer service representative responds appropriately.

In short, running small snippets of text through a sentiment filter seems like a lot of work to give a fancy-sounding solution — but really only a half-solution — to an issue that would be much more straightforwardly fixed by simply asking the user to select their own sentiment from a range of options.

I’m not saying that sentiment analysis is of no use. But the majority of cases cited so far seem to be solutions in search of a problem, which may mean consultants in search of a sophisticated product to recommend when a simpler, probably cheaper one might work better.


Here’s Robert J. Marks’s take on the same paper:

Can the machine know you are just being sarcastic? Researchers claim to have come up with an artificial intelligence program that can detect sarcasm on social media platforms. Marks is skeptical because teasing apart the ambiguities that are part of sarcasm appears to be beyond the ability of artificial intelligence.

You may also enjoy:

Flubbed headlines: New challenge for AI common sense. The late Paul Allen thought teaching computers common sense was a key AI goal. To help, I propose a new challenge, the Flubbed Headline Challenge: Teach computers to correctly understand the headline “Students Cook and Serve Grandparents.”

and

Researchers disappointed by efforts to teach AI common sense. When it comes to common sense, can the researchers really dispense with the importance of life experience? Yuchen Lin and research colleagues found that AI performs much more poorly on intuitive knowledge/common sense questions than many might expect.


Jonathan Bartlett

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Jonathan Bartlett is a senior software R&D engineer at Specialized Bicycle Components, where he focuses on solving problems that span multiple software teams. Previously he was a senior developer at ITX, where he developed applications for companies across the US. He also offers his time as the Director of The Blyth Institute, focusing on the interplay between mathematics, philosophy, engineering, and science. Jonathan is the author of several textbooks and edited volumes which have been used by universities as diverse as Princeton and DeVry.
