Mind Matters Natural and Artificial Intelligence News and Analysis

Deepfake of Queen’s Christmas Message Highlights Era of Fake News

The concept is actually an old one and we are not helpless against such deceptions

Elizabeth II is among the longest-serving constitutional monarchs in history (1953–). Britain’s edgy Channel 4 tested the waters with a deepfake Christmas address.

In Commonwealth countries like Canada, it is a longstanding custom to listen to Elizabeth’s Christmas Address. So how did the fake fare?

If you have bad eyesight and limited hearing, you might, might, be fooled by the fake Queen on a busy Christmas day. But by the time she starts talking about Netflix and launches into a dance routine, you’d surely know something’s up. Channel 4 makes little effort to hide its deception, but that hasn’t stopped some critics from expressing discomfort with the stunt.

Rhett Jones, “First Deepfake Address from the Queen of England Makes Its Debut on British TV” at Gizmodo

Well, for one thing, the behavior was utterly unlike the Elizabeth of the past seven decades.

Okay, but deepfakes are getting better. They will be among us for some time—and so will efforts to detect them. For example, there is actually an industry challenge out there:

A climax of these efforts is this year’s Deepfake Detection Challenge. Overall, the winning solutions are a tour de force of advanced DNNs (an average precision of 82.56 percent by the top performer). These provide us effective tools to expose deepfakes that are automated and mass-produced by AI algorithms. However, we need to be cautious in reading these results. Although the organizers have made their best effort to simulate situations where deepfake videos are deployed in real life, there is still a significant discrepancy between the performance on the evaluation data set and a more real data set; when tested on unseen videos, the top performer’s accuracy reduced to 65.18 percent.

Siwei Lyu, “Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race” at Scientific American (July 20, 2020)
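The challenge results above are reported as “average precision,” which rewards a detector for ranking fake videos above real ones. As a rough illustration of how such numbers arise, here is a minimal sketch, with made-up toy scores rather than output from any real detector: per-frame “fake” probabilities are pooled into a video-level score, and average precision is computed over the ranked videos. The function names and data are our own invention for illustration.

```python
# Hedged sketch: pooling a frame-level deepfake detector's scores into
# a video-level score, then computing average precision (the metric
# cited for the Deepfake Detection Challenge). All scores and labels
# below are toy data, not results from any actual detector.

def video_score(frame_scores):
    """Pool per-frame 'fake' probabilities into one video-level score (mean)."""
    return sum(frame_scores) / len(frame_scores)

def average_precision(labels, scores):
    """Average precision: rank videos by descending score (label 1 = fake,
    0 = real) and average the precision observed at each true positive."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits, total_fakes, ap = 0, sum(labels), 0.0
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            ap += hits / rank  # precision at this recall point
    return ap / total_fakes

# Toy example: four videos, two actually fake.
scores = [video_score(s) for s in ([0.9, 0.8], [0.2, 0.4], [0.7, 0.6], [0.1, 0.3])]
labels = [1, 0, 1, 0]
print(average_precision(labels, scores))  # 1.0: both fakes ranked first
```

A detector that ranks every fake above every real video scores 1.0 on this metric; the challenge winners’ 82.56 percent (falling to 65.18 percent on unseen videos) reflects how often real-world fakes slip below genuine footage in that ranking.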

Could deepfakes influence elections? That question has been raised. Maybe, but there are simpler ways of swaying an election. For one thing, a deepfake can be demonstrated to be false, whereas ordinary propaganda is undetectable so long as people believe it.

Meanwhile, here are five things to know about the world of deepfakes:

➤ It’s not a new idea: “Psychological disruption and deceit aren’t new threats. China’s eminent sixth-century B.C. strategist, Gen. Sun Tzu, had a poet’s knack for the epigram — the ability to convey the complex in a succinct phrase. ‘All warfare is based upon deception,’ he wrote. Italy’s Renaissance philosopher of coercive pragmatism, Niccolò Machiavelli, declared, ‘Though fraud in other activities may be detestable, in the management of war it is laudable and glorious, and he who overcomes the enemy by fraud is as much to be praised as he who does by force.’” – Austin Bay, Strategy Page (July 30, 2019)

➤ The software is quite easy to buy and use.

➤ Yes, deepfakes are definitely used for fraud. In one case, a voice deepfake tricked a manager into transferring thousands of dollars to a fraudster. It could be worse: “While someone using deepfake audio to pretend they’re the CEO of a company and getting that company’s accounting department to wire them $1 million because of an ‘emergency’ is one thing, the tech could also be used for sabotage. What if one rival–or even a nation-state–wanted to sink Apple’s stock price? A well-timed deepfake audio clip that purports to show Tim Cook having a private conversation with someone about iPhone sales tanking could do just that–wiping billions off the stock market in seconds.” (Fast Company, July 19, 2019)

But that scenario assumes, of course, that Apple isn’t capable of fighting back, which, we suspect, it is.

➤ Yes, there are ways of detecting deepfakes up front: “Researchers at the University of Surrey developed a solution that might solve the problem: instead of detecting what’s false, it’ll prove what’s true. Scheduled to be presented at the upcoming Conference on Computer Vision and Pattern Recognition (CVPR), the technology, called Archangel, uses AI and blockchain to create and register a tamper-proof digital fingerprint for authentic videos. The fingerprint can be used as a point of reference for verifying the validity of media being distributed online or broadcasted on television.” (PC Mag, June 19, 2019)

Tamper-proofing, for example, can be “baked into” cameras.
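The core idea behind a system like Archangel — register a fingerprint of the authentic video at publication time, then check later copies against it — can be sketched in a few lines. This is only an illustration of the register-and-verify pattern: a plain SHA-256 hash and an in-memory dictionary stand in for Archangel’s actual AI/blockchain scheme, which is far more elaborate (and, unlike a raw hash, robust to benign re-encoding). All names here are our own.

```python
# Hedged sketch of fingerprint-based video authentication: register a
# digest of the authentic file, then verify later copies against it.
# A SHA-256 hash and a dict stand in for Archangel's AI fingerprint
# and blockchain ledger; this is illustrative only.

import hashlib

registry = {}  # stand-in for a tamper-evident ledger

def register(name, data: bytes):
    """Record the authentic file's fingerprint at publication time."""
    registry[name] = hashlib.sha256(data).hexdigest()

def verify(name, data: bytes) -> bool:
    """Check whether this copy matches the registered fingerprint."""
    return registry.get(name) == hashlib.sha256(data).hexdigest()

original = b"Queen's Christmas address, authentic broadcast bytes"
register("xmas-2020", original)
print(verify("xmas-2020", original))                       # True
print(verify("xmas-2020", original + b" tampered frame"))  # False
```

The point of anchoring the fingerprint in a blockchain, rather than a dictionary, is that no one — including the registrar — can quietly rewrite it after the fact.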

More practically, here’s some advice on the ground:

Approach each image you see with skepticism. Does it come from a media outlet you recognize? Is the photographer credited? Does it have a caption that explains what’s happening in detail? All of these things can be faked, of course, but not without effort, and we’re trying to avoid getting taken in by bargain basement propagandists here. “I don’t like being fooled by people,” says James O’Brien, an expert in computer graphics and image and video forensics at UC Berkeley. “I think people should take that attitude. When you see the candidate you hate kicking puppies, stop and ask yourself where is this video coming from? How do I know it’s real?” If it confirms all your bitterest feelings on a subject, that is a sign of truthiness rather than truth.

Emma Grey Ellis, “How to Spot Phony Images and Online Propaganda” at Wired

Obviously, any tech-savvy person today needs to be aware of these risks and plan for them. The golden rule, of course, is: When in doubt, doubt; and if it sounds unbelievable, don’t believe it.

In the meantime, deepfakes will help make great thrillers, of course.


You may also enjoy: Here’s a deepfake version of former U.S. President Obama: See what you think.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
