[Image: Two chess teams, white and black, face each other at the beginning of a game. Licensed via Adobe Stock]

AI Flags “Black” and “White” Language of Chess as Racist

New research shows the weakness of depending on AI to accurately flag racist online content

Last summer, a video on YouTube’s most popular chess channel was taken down mid-livestream and flagged as containing “harmful and dangerous” content.

New research into the incident indicates that artificial intelligence algorithms programmed to scan for racist and other hateful speech online may be to blame.

On June 27, 2020, Antonio Radić, the Croatian chess player who hosts the most popular chess channel on YouTube, was conducting a livestream with chess Grandmaster Hikaru Nakamura. Around the one-hour-and-twenty-minute mark of their discussion, the video was cut off and removed from Radić’s channel.

When Radić (or anyone else) tried to access the video, they were met with a message from YouTube: “We’ve removed this video because it violates our Community Guidelines.” Radić submitted an appeal, but it was immediately denied.

“It really is weird,” Radić told his viewers in a video the following day, explaining what was happening. “…I mean, I don’t even swear in my videos. There’s no profanity. There is no inappropriate content of any kind. And if such a podcast can be taken down, well it’s hard to even find the motivation to continue doing something like that.”

The video was restored after 24 hours. YouTube acknowledged that it should not have been removed but has not provided an explanation as to why it was removed in the first place.

Enter Carnegie Mellon University scientists. Project scientist (and fellow chess player) Ashique KhudaBukhsh and engineer Rupak Sarkar theorized that chess conversations — which place the words “black” and “white” within the context of battle — may have confused artificial intelligence programmed to identify hateful speech on YouTube.

This was Radić’s theory from the beginning: “if the YouTube algorithm was really an idiot, it maybe could have heard something like, ‘…maybe black goes to D6 instead of C6, white will always be better…’ And then maybe with the current situation in the world, if he hears something like that — ‘white will always be better’ — it might flag this video as inappropriate, harmful, or dangerous and take it down.”

“But if that’s the case,” he added, “then I’m pretty sure all of my 1800 videos will be taken down as it’s pretty much black against white, to the death every video. And I don’t think it’s supposed to work like that.”

KhudaBukhsh and Sarkar conducted an experiment in which they used a hate-speech classifier to scan nearly 700,000 comments on more than 8,000 chess videos, to find out what it would flag as racist or hateful content.

After manually reviewing a selection of 1,000 comments that had been classed by the AI as hate speech, they found that 82 per cent of them had been misclassified due to the use of words like ‘black’, ‘white’, ‘attack’ and ‘threat’ – all of which (are) commonly used in chess parlance.

Anthony Cuthbertson, AI mistakes “black and white” chess chat for racism at The Independent
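The paper does not say which classifier the researchers tested, but the effect they describe is easy to probe with any off-the-shelf toxicity model. Below is a minimal sketch in Python, assuming the Hugging Face transformers library and the publicly available unitary/toxic-bert model; both are stand-ins chosen for illustration, not the study’s actual setup.

from transformers import pipeline

# Load a generic pretrained toxicity classifier. The model name here is
# an assumed stand-in; the study does not identify its classifier.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Innocuous chess commentary built on the words the researchers found
# were driving misclassifications: "black", "white", "attack", "threat".
chess_comments = [
    "Black's attack on the kingside is a serious threat.",
    "If black goes to d6 instead of c6, white will always be better.",
    "White should trade queens and squeeze black in the endgame.",
]

for comment in chess_comments:
    # Each prediction is a dict with a 'label' and a confidence 'score'.
    result = classifier(comment)[0]
    print(f"{result['label']:<10} {result['score']:.2f}  {comment}")

A model that judges surface vocabulary, with no way to infer that the context is a board game, has no principled basis for separating sentences like these from genuinely hateful ones.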

WIRED reported:

The experiment exposed a core problem for AI language programs. Detecting hate speech or abuse is about more than just catching foul words and phrases. The same words can have vastly different meaning in different contexts, so an algorithm must infer meaning from a string of words.

Will Knight, Why a YouTube Chat About Chess Got Flagged for Hate Speech at WIRED

“Without a human in the loop,” KhudaBukhsh and Sarkar concluded, “relying on off-the-shelf classifiers’ predictions on chess discussions can be misleading.”
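That conclusion is baked into their methodology: a classifier’s flags only become a trustworthy number after a person audits them, which is why the researchers hand-reviewed 1,000 flagged comments. Here is a rough sketch of that manual-review step in Python, with invented comments and verdicts standing in for the real data.

import random

# Invented stand-ins for comments a classifier flagged as hate speech,
# paired with a human reviewer's verdict (True = genuinely hateful). In
# the study, the flags came from nearly 700,000 comments on more than
# 8,000 chess videos, and 1,000 of them were reviewed by hand.
flagged = [
    ("Black's attack is a huge threat to white's king.", False),
    ("White will always be better after that blunder.", False),
    ("A genuinely hateful comment would go here.", True),
    ("Black should sacrifice the knight and keep attacking.", False),
]

# Review a random sample of the flagged comments (here, all four).
sample = random.sample(flagged, k=len(flagged))

# The share of flags a human overturns is the false-positive rate --
# the figure the researchers put at 82 per cent.
false_positives = sum(1 for _, is_hate in sample if not is_hate)
print(f"False-positive rate: {false_positives / len(sample):.0%}")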

The limitations of AI have been pointed out before. Algorithms lack creativity, so they cannot interpret speech as a human would.


“Fundamentally, language is still a very subtle thing,” said CMU professor Tom Mitchell. “These kinds of trained classifiers are not soon going to be 100 percent accurate.”

Unfortunately, these limitations are not stopping Big Tech companies from relying on AI to flag (and censor) hate speech online. According to WIRED:

In 2018, Mark Zuckerberg told Congress that AI would help stamp out hate speech. Earlier this month, Facebook said its AI algorithms detected 97 percent of the hate speech the company removed in the last three months of 2020, up from 24 percent in 2017. But it does not disclose the volume of hate speech the algorithms miss, or how often AI gets it wrong.

Will Knight, Why a YouTube Chat About Chess Got Flagged for Hate Speech at WIRED

KhudaBukhsh and Sarkar presented their findings at the Association for the Advancement of Artificial Intelligence (AAAI) annual conference in February and won the award for Best Student Abstract Three-Minute Presentation.


You may also wish to read:

AI is no match for ambiguity. Many simple sentences confuse AI but not humans. (Robert J. Marks)

and

Researchers disappointed by efforts to teach AI common sense. When it comes to common sense, can the researchers really dispense with the importance of life experience?


Caitlin Cory

Communications Coordinator, Discovery Institute
Caitlin Cory is the Communications Coordinator for Discovery Institute. She has previously written for Discovery on the topics of homelessness and mental illness, as well as on Big Tech and its impact on human freedom. Caitlin grew up in the Pacific Northwest, graduated from Liberty University in 2017 with her Bachelor's in Politics and Policy, and now lives in Maryland with her husband.
