Our Walter Bradley Center director Robert J. Marks is back with Jonathan Bartlett and Eric Holloway for the second installment of 2020 smash hits in AI. Readers may recall that we offered a fun series during the holidays about the oopses and ums and ers in the discipline (typically hyped by uncritical sources). Now it's time to celebrate the real achievements! Our nerds think that #5 is believable deep fakes in entertainment, for better or worse.
Here’s a partial transcript. (Show Notes and Additional Resources follow, along with a link to the complete transcript.)
Robert J. Marks: Jon, what are deep fakes and what is Disney doing that’s going to wow us?
Jonathan Bartlett: People are worried about the potential for using them for evil, and that's definitely a worry. I've seen deep fakes that make Obama or Trump say all sorts of awful things. And if you weren't aware of the technology, you might think that videos like these really existed. They do make it hard for people to distinguish truth from fiction.
But there are also a lot of practical applications. Deep fakes speed up animation; you can think of animation as a giant deep fake project. So the ability to do real-time deep fakes helps people do filmography and special effects. But another really interesting advance in deep fakes is compression. What some people have figured out is that you can basically deep fake yourself. Once you have a baseline image, deep fake technology requires fewer bits to send the changes in your face over the wire than it does to transmit actual video.
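To put rough numbers on that bandwidth claim, here is a back-of-the-envelope sketch. All figures are illustrative assumptions, not from any particular deepfake codec: a modest uncompressed video frame versus one frame's worth of facial landmark coordinates (68 points is a common face-landmark convention).

```python
# Illustrative arithmetic only: frame size, landmark count, and precision
# below are assumptions, not figures from any specific deepfake system.

def raw_frame_bits(width=640, height=480, bits_per_pixel=24):
    """Bits needed to send one uncompressed video frame."""
    return width * height * bits_per_pixel

def landmark_frame_bits(num_landmarks=68, coords=2, bits_per_coord=16):
    """Bits needed to send one frame's worth of facial landmark positions."""
    return num_landmarks * coords * bits_per_coord

raw = raw_frame_bits()            # 7,372,800 bits per frame
landmarks = landmark_frame_bits() # 2,176 bits per frame
print(f"raw: {raw} bits, landmarks: {landmarks} bits, "
      f"ratio: {raw // landmarks}x")
```

Even under these crude assumptions, shipping landmark deltas instead of pixels is a saving of more than three orders of magnitude per frame, which is why "deep faking yourself" is attractive for video calls.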
Note: "How we can fight back against deepfakes: Let's start by understanding how the technology works. It's not that hard." Also: "A deepfake of Queen Elizabeth's traditional Christmas message: Five things to expect."
Robert J. Marks: Okay. Let's talk about compression. I usually explain compression with the idea that it's like transporting dehydrated food. You take the water out, so it's cheaper to ship, and then at the destination you put the water back in. That's the motivation behind compression. Is that right?
Jonathan Bartlett: That’s a really good analogy for it. Yeah. And the problem with compression in general is that there’s no general way to compress things. There’s no generalized algorithm that will compress any stream of bits. But the nice thing is that usually what we want to transmit is not any stream of bits. It’s usually very specialized streams of bits.
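A simple counting argument backs up the claim that no lossless algorithm can compress every stream of bits: there are more inputs of length n than there are outputs shorter than n, so some input must fail to shrink. A quick sanity check of the arithmetic (the choice of n is arbitrary):

```python
# Pigeonhole check: there are 2**n bit strings of length n, but only
# 2**n - 1 bit strings of any length strictly less than n. So no
# lossless scheme can map every length-n input to a shorter output.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))  # lengths 0 .. n-1
assert shorter_outputs == inputs - 1
print(inputs, "inputs, but only", shorter_outputs, "possible shorter outputs")
```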
Robert J. Marks: What about zip files or PNG images? They use a common compression algorithm, don’t they?
Jonathan Bartlett: Exactly. The compression schemes we generally use work because the bit streams in our files are not just any bit streams; they usually follow patterns. For example, I can zip up my text file and make it really, really small because I'm using text, which is only a subset of the available bits; I'm writing the text in words, which makes it more regular; and I'm putting those words into sentences, some of which are really common, which makes it compressible. Each of these levels of expectation allows you to compress your signal to some degree. And what deep fakes basically do is separate out, at a really deep level, the bits that are background and the bits that are needed for the foreground. And honestly, your mind actually does a deep fake as well.
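Both halves of the discussion above, the dehydrate/rehydrate round trip and the point that only patterned data compresses, can be demonstrated with Python's standard zlib module (a generic DEFLATE compressor, used here purely as an illustration):

```python
import os
import zlib

# Patterned text compresses well; uniformly random bytes do not.
text = b"common words make common sentences, and they compress well. " * 64
noise = os.urandom(len(text))

text_out = zlib.compress(text, 9)
noise_out = zlib.compress(noise, 9)

# "Rehydration": decompression restores the original exactly.
assert zlib.decompress(text_out) == text

print(f"text:  {len(text)} -> {len(text_out)} bytes")
print(f"noise: {len(noise)} -> {len(noise_out)} bytes")
assert len(text_out) < len(text) // 10  # big win on patterned data
assert len(noise_out) >= len(noise)     # no win on random data
```

The random stream is the pigeonhole principle in action: DEFLATE can find no pattern to exploit, so the "compressed" output is at least as large as the input.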
Robert J. Marks: Oh, how is that?
Jonathan Bartlett: The connection between our eyes and our brains is not as high bandwidth as you might imagine. When you look straight ahead, your optics are focused on what's directly in front of you, but your mind is putting together a lot of what's around it. You're actually "seeing" more than your eyes take in, because your mind is basically faking some of it for you. So that's what deep fakes do: they take a small amount of data, separate out the different pieces, and replace the parts that matter less.
- 00:31 | Introducing Jonathan Bartlett
- 00:40 | Introducing Dr. Eric Holloway
- 02:47 | #5: Deepfaking for Entertainment
- 10:12 | #4: Paralyzed Man Moves in Mind-Reading Exoskeleton
- 14:32 | #3: Deep Learning for leukocoria, or “white eye”
- 16:36 | #2: AI Beats Professionals in Six Player Poker
- 20:23 | #1: AI Cracks Protein Folding
- Jonathan Bartlett at Discovery.org
- Eric Holloway at Discovery.org
- #5: “Disney’s deepfakes are getting closer to a big-screen debut” (The VERGE)
- #4: “Paralyzed Man Moves in Mind-Reading Exoskeleton” (BBC News)
- The Brain That Changes Itself by Norman Doidge, M.D.
- #3: “An App That Can Catch Early Signs of Eye Disease In A Flash” (NPR), “Eye-catching tech” (Waco Trib)
- #2: “Carnegie Mellon and Facebook AI beats professionals in six-player poker” (Carnegie Mellon)
- #1: “Protein Folding: AI has cracked a problem that stumped biologists for 50 years. It’s a huge deal.” (VOX), “AlphaFold Scores Huge Breakthrough in Analyzing Causes of Disease” (Mind Matters News)