
How Much Difference Can AI Deep Fakes Really Make in Elections?

Maybe not much, and that truth should make us uncomfortable

Deep fakes are “videos that have been constructed to make a person appear to say or do something that they never said or did.” Many commentators worry that voters will be influenced by pure fiction: as the 2020 US election looms, we learn of the fight to stay ahead (CNN, April 26, 2019) of the “growing threat” (ABC Eyewitness News, May 7, 2019) because “The 2020 campaigns aren’t ready for deepfakes” (Axios/HBO, June 4, 2019).

“Deepfakes Are Coming. We Can No Longer Believe What We See,” the New York Times warned yesterday. And today we heard: “Alarming AI clones both a person’s voice and their speech patterns” (Futurism, June 11, 2019).

More soberly, we are warned by an analyst:

Under the right set of circumstances, deepfakes will be very influential. They don’t even have to be particularly good to potentially swing the outcome of an election. As with so much in elections, deepfakes are a numbers game. While the presence of tampering in all but the most sophisticated deepfakes can be quickly identified, not everyone who views them will get that message.

More fundamentally, not everyone wants to get that message. As can occur with other forms of online misinformation, deepfakes will be designed to amplify voter misconceptions, fears, and suspicions, making what might seem outlandish and improbable to some people appear plausible and credible to others. To influence an election, a deepfake doesn’t need to convince everyone who sees it. It just needs to undermine the targeted candidate’s credibility among enough voters to make a difference.

John Villasenor, “Deepfakes, social media, and the 2020 election” at Brookings Institution

A bit of history: Fakes aren’t new to modern politics. George Orwell (1903–1950) wrote 1984 from experience with totalitarian regimes, not from a vivid imagination. Stalin and Hitler erased people from history, as a 2012 exhibition at New York’s Metropolitan Museum of Art documented.

The idyllic but fake Potemkin village dates back to the eighteenth century. In the nineteenth century, aspiring performers were expected to hire a professional claque to act as a fake fan club.

Deception is hardly new, but deep fakes may be quicker and cheaper than traditional dishonesty. In a free society, though, can they make as much difference as many fear (or hope)?

One analyst doubts it. Russell Brandom writes, “We’ve spent the last year wringing our hands about a crisis that doesn’t exist.” For one thing, most bad actors resort to flat-out lying instead:

During the time deepfake tech has been available, misinformation campaigns have targeted the French elections, the Mueller investigation and, most recently, the Democratic primaries. Sectarian riots in Sri Lanka and Myanmar were fueled by fake stories and rumors, often deliberately fabricated to stoke hate against opposing groups. Troll campaigns from Russia, Iran, and Saudi Arabia have raged through Twitter, trying to silence opposition and confuse opponents.

Russell Brandom, “Deepfake propaganda is not a real problem” at The Verge

And there is a reason for that:

It’s a good question why deepfakes haven’t taken off as a propaganda technique. Part of the issue is that they’re too easy to track. The existing deepfake architectures leave predictable artifacts on doctored video, which are easy for a machine learning algorithm to detect. Some detection algorithms are publicly available, and Facebook has been using its own proprietary system to filter for doctored video since September. Those systems aren’t perfect, and new filter-dodging architectures regularly pop up. (There’s also the serious policy problem of what to do when a video triggers the filter, since Facebook hasn’t been willing to impose a blanket ban.)

Russell Brandom, “Deepfake propaganda is not a real problem” at The Verge

If we assume that any technology, no matter how sophisticated, leaves some trace of its work (the principle behind detecting forgery), then detection may always remain possible.
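Brandom’s point about “predictable artifacts” can be made concrete. One approach reported in the research literature (not Facebook’s proprietary system, whose details are not public) classifies frames by their frequency spectrum, since the upsampling used to generate deepfakes tends to leave periodic traces there. The sketch below is a minimal illustration of that idea, assuming NumPy and scikit-learn are available; the random arrays are stand-ins for real labeled video frames, not actual data.

```python
# A minimal sketch of frequency-based deepfake detection. It follows the
# published observation that GAN upsampling leaves periodic artifacts in an
# image's frequency spectrum. The toy classifier and stand-in data below are
# illustrative assumptions, not any platform's actual detection system.
import numpy as np
from sklearn.linear_model import LogisticRegression

def radial_power_spectrum(frame: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log power spectrum of one grayscale frame."""
    f = np.fft.fftshift(np.fft.fft2(frame))
    power = np.abs(f) ** 2
    h, w = power.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)          # distance from spectrum center
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1       # radial bin for each pixel
    spectrum = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return np.log1p(spectrum[:n_bins] / np.maximum(counts[:n_bins], 1))

# Stand-in frames: in practice these would be labeled authentic vs. doctored
# video frames (1 = doctored, 0 = authentic).
real_frames = [np.random.rand(256, 256) for _ in range(20)]
fake_frames = [np.random.rand(256, 256) for _ in range(20)]
X = np.array([radial_power_spectrum(f) for f in real_frames + fake_frames])
y = np.array([0] * 20 + [1] * 20)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new frame: estimated probability that it carries GAN-style artifacts.
new_frame = np.random.rand(256, 256)
print(clf.predict_proba(radial_power_spectrum(new_frame).reshape(1, -1)))
```

This is also why the arms race Brandom describes continues: a new generation architecture that smooths out these spectral traces dodges the existing filter, and detectors must then be retrained on its output.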

But, as both Brandom and Villasenor note, a key problem is that people may simply want to believe what the doctored video shows. In that case, video is less useful than gossip because, as Brandom says, “Most troll campaigns focused on affiliations rather than information, driving audiences into ever more factional camps. Video doesn’t help with that; if anything, it hurts by grounding the conversation in disprovable facts.”

Lawmakers in a free society who are very concerned about deepfakes might want to specify exactly how they are more dangerous than innuendo and insinuation, which, by their very nature, are much harder to detect and address.


See also: AI dangers that are not just fake news


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
