
Are Deepfakes Too Deep for Us? Or Can We Fight Back?

Keeping up with the fakers is becoming more of a challenge

Since 2014, there has been a new twist to misrepresentation in politics: deepfakes—computer-generated images that seem quite real. Adam Garfinkle of Singapore’s Nanyang Technological University explains how the technology, generative adversarial networks (GANs), works:

A GAN operator pits a generator (G) against a discriminator (D) in a gamelike environment in which G tries to fool D into incorrectly discriminating between fake and real data. The technology works by means of a series of incremental but rapid adjustments that allows D to discriminate data while G tries to fool it.

Adam Garfinkle, “Disinformed” at Inference Review

Once the problem is reduced to a giant calculation, a giant computer learns much more quickly than the rest of us. And it can then be turned around to generate convincing fakes.
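For readers who want to see the mechanics, here is a minimal sketch of that generator-versus-discriminator loop in Python (using PyTorch). The toy one-dimensional data, the network sizes, and the training settings are illustrative assumptions, not anything from Garfinkle's description:

```python
# Minimal GAN sketch: a generator (G) learns to fool a discriminator (D).
# Toy example on 1-D Gaussian "real" data; all settings are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> real/fake score

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))                 # G turns random noise into candidate fakes

    # D learns to tell real from fake ...
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # ... while G learns to make D call its fakes "real"
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After enough rounds, G's output becomes hard for D to distinguish from the real distribution; scaled up from toy numbers to images, that is what makes the fakes convincing.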

Despite what we might fear, most uses of deepfakes are not attempts to deceive:

GANs can reconstruct three-dimensional images from two-dimensional photographs. They can be used to visualize industrial design, improve astronomical images by filling in statistically what real cameras cannot capture, and generate showers of imaginary particles for high-energy physics experiments. GANs can also be used to visualize motion in static environments, which could help find people lost or hiding in forests or jungles. In 2016, GAN technology was used to generate new molecules for a variety of protein targets in cells implicated in fibrosis, inflammation, and cancer.

Adam Garfinkle, “Disinformed” at Inference Review

But just as we didn’t realize that some of those stars in the astronomy pictures were fake (as opposed to mere holes in the picture), we might be fooled about things that really matter:

What makes GANs frightening is their power to produce photographic images of people who do not exist, or to generate video from voice recordings, or to doctor images of people who do exist to make them seem to be someone else, or to say things they never did or would say. GANs can be used to create pornography by using an image without the subject’s knowledge or consent.

Adam Garfinkle, “Disinformed” at Inference Review

Of the 15,000 deepfake images detected by one program by September 2019, Garfinkle tells us, 96% were pornography. Some deepfakes have also figured in identity fraud in which victims lost money.

Since 2014, deepfakes have gotten ever more convincing:

At first, catching deepfakes wasn’t too hard – even the best ones had visual giveaways like blurring, distortion, and uncanny facial differences that made them just seem “off.”

It’s a cat-and-mouse game, though, and it seems that as soon as we learn one method for detecting deepfakes, the next generation fixes the flaw.

Andrew Braun, “Can Deepfakes Be Detected” at maketecheasier (October 22, 2019)

Detection is still possible, says techie Andrew Braun, but certainty increasingly requires high-tech analysis:

… the techniques have now improved to the point where these artifacts are only visible to other algorithms combing through the video data and examining things on the pixel level. Some of them can get pretty creative, like one technique that checks to see if the direction of the nose matches the direction of the face. The difference is too subtle for humans to pick up on, but machines turn out to be pretty great at it.

Andrew Braun, “Can Deepfakes Be Detected” at maketecheasier (October 22, 2019)
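To make the nose-versus-face idea concrete, here is a hedged Python sketch of the kind of geometric check Braun describes: it flags a frame when two direction estimates (one for the nose, one for the whole face) disagree by more than a small angle. The direction vectors are assumed to come from some separate landmark or head-pose model, and the 10-degree threshold is purely illustrative:

```python
# Sketch: flag frames where the nose's apparent direction disagrees with the
# face's overall direction. The direction vectors are assumed to come from a
# separate head-pose / landmark model; the 10-degree threshold is illustrative.
import numpy as np

def angle_between(u, v):
    """Angle in degrees between two 3-D direction vectors."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def looks_inconsistent(nose_dir, face_dir, threshold_deg=10.0):
    return angle_between(nose_dir, face_dir) > threshold_deg

# Example: in a real frame the two vectors are usually nearly aligned.
print(looks_inconsistent(np.array([0.0, 0.1, 1.0]),
                         np.array([0.0, 0.0, 1.0])))   # False (about 6 degrees apart)
```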

The fake-generating machines are still bad at natural facial movement, particularly around the mouth:

The second weak spot is focused on the movement of the mouth, and if it correlates well with the movements in the face and body. One study found that most deepfake software will manipulate video on a frame-by-frame basis and don’t enforce temporal coherence – a weak spot which can be detected and exploited.

Arnold, “Can Deepfakes Be Detected By An Algorithm Or Software?” at DeepFakeNow (April 21, 2020)
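One way to exploit that weak spot, sketched here under stated assumptions, is to measure how erratically mouth landmarks jump from frame to frame; frame-by-frame synthesis tends to produce jittery motion that a real, temporally coherent video does not. The landmark array and the threshold below are illustrative, not the method of the study DeepFakeNow cites:

```python
# Sketch: frame-by-frame synthesis tends to produce jittery landmark tracks.
# `mouth_landmarks` is assumed to be an array of shape (num_frames, num_points, 2)
# from any face-landmark tracker; the threshold is illustrative.
import numpy as np

def temporal_jitter(mouth_landmarks):
    # Displacement of each landmark between consecutive frames.
    step = np.diff(mouth_landmarks, axis=0)
    per_frame_motion = np.linalg.norm(step, axis=-1).mean(axis=-1)
    # Jitter = how erratic the motion is relative to its typical size.
    return per_frame_motion.std() / (per_frame_motion.mean() + 1e-8)

def looks_temporally_incoherent(mouth_landmarks, threshold=1.5):
    return temporal_jitter(mouth_landmarks) > threshold
```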

But with all the time and money available, the machines will get better. So will the machines that pick up their mistakes. A newer proposed technique (paper) analyzes the patterns created by blood flow in a real human being:

These edited videos can be extremely difficult to detect, but researchers have suggested that examining how blood moves around the face could indicate what is real and what is fake, since deepfakes cannot replicate it with high enough fidelity.

“Biological signals hidden in portrait videos can be used as an implicit descriptor of authenticity, because they are neither spatially nor temporally preserved in fake content,” the research, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, states.

Adam Smith, “Strange blood flow is the secret to detecting deepfakes, new research suggests” at Independent (October 2, 2020)

That’s a bigger problem for deepfakers than syncing realistic mouth movements. Real people have live, beating hearts; fabricated images don’t:

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

Mark Frauenfelder, “A person’s heartbeat can be used to detect deepfakes” at BoingBoing (September 2020)
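The underlying signal is a remote photoplethysmogram (PPG): the pulse causes tiny periodic color changes in facial skin. Here is a hedged Python sketch of the idea; it averages the green channel over an assumed skin patch and checks whether the power spectrum has a strong peak in the normal heart-rate band. The patch, frame rate, and threshold are illustrative assumptions, not the SUNY and Intel researchers' actual pipeline:

```python
# Sketch: a pulse shows up as a tiny periodic color change in facial skin.
# `frames` is assumed to be an array of shape (num_frames, H, W, 3), and
# (y0, y1, x0, x1) an assumed skin patch (e.g. forehead); fps is the frame rate.
import numpy as np

def pulse_band_strength(frames, y0, y1, x0, x1, fps=30.0):
    # Mean green-channel intensity of the skin patch in each frame.
    signal = frames[:, y0:y1, x0:x1, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)       # roughly 42-240 beats per minute
    # Fraction of signal power concentrated in the heart-rate band.
    return spectrum[band].sum() / (spectrum.sum() + 1e-8)

def has_plausible_pulse(frames, y0, y1, x0, x1, fps=30.0, threshold=0.5):
    return pulse_band_strength(frames, y0, y1, x0, x1, fps) > threshold
```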

Deepfakes will probably improve in this area too but, in general, anything one human being can figure out so as to fool us, another human being can figure out so as to disabuse us. So the tech industry is fighting back. But it faces a dilemma over whether to keep its detection methods secret from the deepfakers:

Facebook is also promising a deepfake detector, but plans to keep the source code closed. One problem with open-sourcing deepfake detectors… is that deepfake generation developers can use the detector as the discriminator in a GAN to guarantee that the fake will pass that detector, eventually fueling an AI arms race between deepfake generators and deepfake detectors.

Martin Heller, “What are deepfakes? AI that deceives” at InfoWorld (September 15, 2020)
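Heller's worry can be made concrete. If a detector's weights are public, a faker can freeze it and optimize a candidate fake directly against its score, much as a GAN generator optimizes against its discriminator. In this hedged sketch, `detector` is only a placeholder network standing in for whatever open-sourced model a faker might target:

```python
# Sketch: with a published detector, a faker can optimize a fake against it.
# `detector` stands in for any open-sourced model mapping an image to a
# "probability of being fake"; here it is just a placeholder network.
import torch
import torch.nn as nn

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1), nn.Sigmoid())
for p in detector.parameters():
    p.requires_grad_(False)                      # detector is fixed, not trained

fake = torch.rand(1, 3, 64, 64, requires_grad=True)   # candidate fake image
opt = torch.optim.Adam([fake], lr=0.01)

for step in range(200):
    score = detector(fake)                       # detector's "fakeness" score
    loss = score.mean()                          # push the score toward "real"
    opt.zero_grad(); loss.backward(); opt.step()
```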

So here’s the dilemma: if the method were kept secret, Facebook would know an image was fake, but the rest of us would not know how Facebook knew; we would have to take its word for it. Whether that is good enough depends on how much we trust Facebook. Microsoft is also working on a deepfake detection tool.

Progress in the area also assumes that the “industry” is universally committed to catching and exposing deepfakes. Maybe not. Every industry, by definition, has interests of its own or it would not exist. It might tolerate deepfakes that libel a hated figure or flatter a beloved one.

A technology that puts control in the hands of the owners of an authentic image might be more reliable. Some look to blockchain to protect their images from tampering:

That’s what companies like Factom, Ambervideo, and Axiom are doing by encoding data about videos onto immutable blockchains.

The basic idea behind a lot of these projects is that the data contained in a video file or generated by a certain camera can be used to generate a unique signature that will change if the video is tampered with. Eventually, videos uploaded to social media might come with an option to generate an authentication code that the original uploader could register on a blockchain to prove that they were the original video owners.

Andrew Braun, “Can Deepfakes Be Detected” at maketecheasier (October 22, 2019)
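The core mechanism is simpler than it sounds: a cryptographic hash of the video file acts as its fingerprint, and the fingerprint (not the video) is what gets registered. In this hedged Python sketch, a plain dictionary stands in for the blockchain ledger such services would actually use:

```python
# Sketch: register a video's cryptographic fingerprint, then verify it later.
# A real system would write the fingerprint to a blockchain; here a plain
# dictionary stands in for that immutable, timestamped registry.
import hashlib

registry = {}   # placeholder for the ledger

def fingerprint(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def register(path, owner):
    registry[fingerprint(path)] = owner          # "this owner published this exact file"

def is_untampered(path):
    return fingerprint(path) in registry         # any edit changes the hash
```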

Sure, but that’s higher tech than many of us can manage today. Tech companies could, of course, make it easier by marketing such protection as a service and eventually as a standard feature. We might be wiser to hope for that outcome than to expect industry conscience alone to do all the necessary work.

Meanwhile, is there anything the rest of us can do to sharpen our fake-spotting skills? DeepFakeNow offers five tips; here is one:

Next to being critical, verifying a story is as simple as a Google Search. Don’t consume news content from a single news source, but consume media content from news sources with different political orientations. Balancing out both news sources will most likely lead you towards a (relative) truth.

Even better would be taking the classic journalism approach. Going straight to the source of the news to verify a story is now easier than ever with the rise of social media platforms.

Check out what the person that is the center of the news scoop has to say about it. Do they refute claims made in the media? Do they confirm what is being said? You obviously cannot pick up the phone and call them, or pay them a visit in their homes. But if the source of the news makes a public statement about the news that is currently circulating, chances are that truth can be found in these types of statements.

Arnold, “5 Ways To Detect And Recognize Deepfakes (As An Average Person)” at DeepFakeNow (May 16, 2020)

We should be especially cautious if scandalous footage suddenly emerges at a critical time, showing a public figure saying or doing things that will be widely criticized but are genuinely out of character and perhaps not even hinted at in the past. Absent corroboration, the footage may be right out there with the latest Aliens Land! vid.

A historical perspective might help: Only in comparatively recent history have most of us had access to a great deal of “information” anyway. At one time, all books were copied slowly and expensively by hand. Deep thinkers thought up great ideas and most of us got by on folk wisdom, life experience, common sense, and rumours. So deepfakes—false tales or video forgeries, if you like—are another outcome of the rise of information technology. How much difference they will make depends, as always, in part on our ability to spot video forgeries and in part on cultivating good judgment and character.


You may also enjoy: AI in war means deepfakes as well as killerbots. In its Gerasimov and Primakov doctrines of warfare, Russia makes this clear. (Denise Simon)

