Some people think machine recognition of emotions is the Next Big Thing:
Humans have always had the innate ability to recognize and distinguish between faces. Now computers are able to do the same. This opens up tons of applications… We have also created a pipeline for detection, recognition and emotion understanding on any input image with just 8 lines of code after the images have been loaded! Our code is open sourced on Github.
Priya Dwivedi, “Face Detection, Recognition and Emotion Detection in 8 lines of code!” at Towards Data Science
We are told at Technology Review that emotion recognition is set to become a $25 billion industry:
Huge companies like Microsoft and Apple, as well as specialized startups like Kairos and Affectiva, are all taking part. Though most commonly used to sell products, emotion recognition technology has also popped up in job recruiting and as a possible tool for figuring out if someone is trying to commit insurance fraud. Back in 2003, the US Transportation Security Administration started training humans to spot potential terrorists by “reading” their facial expressions, so it’s easy to imagine an artificial-intelligence project attempting the same thing. (The TSA program was widely criticized for being based on poor science.)
Angela Chen, “Computers can’t tell if you’re happy when you smile” at Technology Review
But of what value is the output? Recent research suggests that the technology is seriously flawed. Five accomplished scientists representing different camps reviewed over a thousand studies in the field, wondering if they could reach a consensus. They did (their paper is free online) and, essentially, there seems to be no clear scientific basis for the claims made:
“Companies can say whatever they want, but the data are clear,” Lisa Feldman Barrett, a professor of psychology at Northeastern University and one of the review’s five authors, tells The Verge. “They can detect a scowl, but that’s not the same thing as detecting anger.”
James Vincent, “AI emotion recognition can’t be trusted” at The Verge
Essentially, there is no reliable relationship between the facial expressions of assorted individuals at a given time and their emotional states. Two problems researchers have identified are:
1) In most real-life situations, additional cues help us assess a person’s emotional state.
2) Psychological tests of facial expressions’ relationship to our emotions feature exaggerated examples and then demand that we select one option vs. another (anger vs. disgust, for example). In a natural situation, we might prefer to make no such judgment. If we are forced to choose anyway, our guess, which may well be a false positive, is entered into a database used for machine learning.
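To make the second problem concrete, here is a minimal Python sketch. The setup is hypothetical, not taken from any of the reviewed studies: raters are shown faces and must pick one of two emotion labels, with no option to decline. Every genuinely neutral face then enters the dataset with a spurious emotion label.

```python
import random

EMOTIONS = ["anger", "disgust"]  # forced-choice options; "no judgment" is not offered

def forced_choice_label(true_state: str, rng: random.Random) -> str:
    """Simulate a rater who must pick one of the offered labels."""
    if true_state in EMOTIONS:
        return true_state  # clear-cut case: the rater answers correctly
    # Neutral/ambiguous face: the rater is still forced to choose,
    # so the recorded label is effectively a coin flip.
    return rng.choice(EMOTIONS)

rng = random.Random(0)
# 100 faces, 70 of them actually neutral (illustrative numbers)
faces = ["anger"] * 15 + ["disgust"] * 15 + ["neutral"] * 70
dataset = [(true, forced_choice_label(true, rng)) for true in faces]

false_positives = sum(
    1 for true, label in dataset if true == "neutral" and label in EMOTIONS
)
print(false_positives)  # prints 70: every neutral face becomes a spurious label
```

Because the protocol offers no way to say “neither,” all seventy neutral faces are recorded as angry or disgusted, and a model trained on this data inherits the error.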
Some sources offer suggestions for how the system can be improved:
Barrett has suggestions for how to do emotion recognition better. Don’t use single photos, she says; study individuals in different situations over time. Gather a lot of context—like voice, posture, what’s happening in the environment, physiological information such as what’s going on with the nervous system—and figure out what a smile means on a specific person in a specific situation. Repeat, and see if you can find some patterns in people with similar characteristics like gender. “You don’t have to measure everybody always, but you can measure a larger number of people that you sample across cultures,” she says.
Angela Chen, “Computers can’t tell if you’re happy when you smile” at Technology Review
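Barrett’s suggestion can be sketched in code. The toy below is my own illustration (the cue categories and thresholds are invented, not from her paper): it only commits to a label when several cues agree across repeated observations of the same person, and otherwise refuses to judge.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Observation:
    """One multi-cue observation of a person, per Barrett's advice to
    combine face, voice, and posture rather than use a single photo."""
    face: str     # e.g., "smile", "scowl"
    voice: str    # e.g., "tense", "relaxed"
    posture: str  # e.g., "open", "closed"

def infer_state(history: list[Observation]) -> str:
    """Commit to a label only when repeated multi-cue observations agree."""
    votes = Counter()
    for obs in history:
        if obs.face == "smile" and obs.voice == "relaxed" and obs.posture == "open":
            votes["positive"] += 1
        elif obs.face == "scowl" and obs.voice == "tense":
            votes["negative"] += 1
        else:
            votes["uncertain"] += 1  # cues conflict: no judgment
    label, count = votes.most_common(1)[0]
    # Refuse to judge unless a clear majority of observations agree.
    return label if count > len(history) / 2 else "uncertain"

history = [
    Observation("smile", "tense", "closed"),  # a smile alone is not enough
    Observation("smile", "relaxed", "open"),
    Observation("smile", "relaxed", "open"),
]
print(infer_state(history))  # prints "positive": two of three observations agree
```

The point of the sketch is the contrast with single-photo classifiers: the first observation, a smile with tense voice and closed posture, is counted as uncertain rather than as happiness.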
In any event, not everyone is happy with the invasion of privacy that the new technologies enable. For example, Amazon has developed an affordable new system for businesses:
Amazon declined to detail how customers are using emotion recognition. Online documentation for Rekognition warns that the service “is not a determination of the person’s internal emotional state and should not be used in such a way.” But on its Rekognition website, Amazon, whose ecommerce business has squeezed brick-and-mortar retailers in part via deep data on consumers, suggests that stores could feed live images of shoppers into its face-analysis tools to track emotional and demographic trends at different retail locations over time.
Even as Amazon, Google, and Microsoft charge ahead with algorithms that intuit feelings, psychologists warn that trying to read emotions from facial expressions is fundamentally misguided.
Casey Chin, “Amazon says it can detect fear on your face. You scared?” at Wired
Indeed. And here’s a timely warning:
With machine learning, in particular, we often see metrics being used to make decisions—not because they’re reliable, but simply because they can be measured. This is a technology that excels at finding connections, and this can lead to all sorts of spurious analyses: from scanning babysitters’ social media posts to detect their “attitude” to analyzing corporate transcripts of earnings calls to try to predict stock prices. Often, the very mention of AI gives an undeserved veneer of credibility.
James Vincent, “AI emotion recognition can’t be trusted” at The Verge
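The “spurious connections” Vincent describes are easy to reproduce: given enough measurable but meaningless signals, something will always look predictive. A minimal Python illustration using purely synthetic data (no real babysitters or earnings calls involved):

```python
import random

rng = random.Random(42)
n_samples, n_features = 20, 500

# 500 purely random binary "signals" (stand-ins for word counts,
# social-media flags, or phrases in an earnings call)
features = [[rng.randint(0, 1) for _ in range(n_samples)]
            for _ in range(n_features)]
# A random binary target (stand-in for "stock went up")
target = [rng.randint(0, 1) for _ in range(n_samples)]

def accuracy(feature: list[int], target: list[int]) -> float:
    """Fraction of samples where the feature matches the target."""
    return sum(f == t for f, t in zip(feature, target)) / len(target)

# The best random feature almost always beats chance (0.5) handily
best = max(accuracy(f, target) for f in features)
print(best)
```

With only 20 samples and 500 candidate signals, some feature will match the target most of the time by pure chance. The feature carries no information at all, which is exactly how an “attitude” score or an earnings-call predictor can appear to work while measuring nothing.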
In that case, machine learning risks becoming the Gossip II, with all the credibility that that implies.
Further reading on machine recognition of emotions: You have just six emotions! At least it would be easier for the machines if we did.
Chinese technocracy surges ahead with AI surveillance So what do the reservations expressed, about “the soul” and “love,” really mean?
Also: Why it’s so hard to reform peer review The temptation to measure what’s measurable can get in the way of grasping what’s real. (Robert J. Marks)
Featured image: Emotion recognition foiled/Africa Studio, Adobe Stock