When our Walter Bradley Center director, Robert J. Marks, was discussing with Eric Holloway the events that really made a difference in AI, one very interesting issue that came up was the use of deepfakes to substitute for actors in films.
Robert J. Marks: Eric, how is Disney using deep fakes in entertainment?
Eric Holloway: Well, Disney is using deep fakes in entertainment as a way to capitalize on not having to hire lots of really expensive actors. So you can have a few expensive actors, they do their thing, and then you copy their body movements and face. And now you can just hire a bunch of cheap actors and stick the expensive actors’ faces on them. Or you can go in other directions: you can stick cartoon characters on them, and you can make animation a lot simpler for cartoon characters, because now you can use human bodies to do your animation for you and then just throw a cartoon suit on them virtually. So there’s a lot of possibilities here.
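The core idea behind "sticking an expensive actor's face on a cheap actor" is a compositing step: a model-generated face is blended onto the body double's frame wherever a mask says the face belongs. The following is a minimal, illustrative sketch of just that blending step. In a real deepfake pipeline the replacement face comes from a trained neural network (typically an autoencoder); here plain NumPy arrays stand in for frames and for the generated face, and all function and variable names are hypothetical, not from any actual production tool.

```python
import numpy as np

def composite_face(body_frame, generated_face, mask):
    """Blend a generated face onto a body double's frame.

    body_frame:     HxWx3 frame of the stand-in actor
    generated_face: HxWx3 synthesized face region (in practice, model output)
    mask:           HxW float array, 1.0 where the face should appear
    """
    mask3 = mask[..., None]  # broadcast the mask across the 3 colour channels
    blended = mask3 * generated_face + (1.0 - mask3) * body_frame
    return blended.astype(body_frame.dtype)

# Toy data: a grey "body" frame, a white "face", and a mask over the centre.
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
face = np.full((4, 4, 3), 255, dtype=np.uint8)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0

out = composite_face(frame, face, mask)
print(out[0, 0])  # untouched body pixel: [100 100 100]
print(out[1, 1])  # swapped face pixel:   [255 255 255]
```

Real systems add face detection, landmark alignment, and colour correction around this step, but the economic point Holloway makes rests on exactly this substitution: the expensive actor's likeness is data that can be pasted onto anyone.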
Actors know about this and they are not happy about it:
In a recent editorial piece published in the New York Daily News, SAG-AFTRA’s president, Gabrielle Carteris, criticized the Motion Picture Association of America (MPAA) and the Entertainment Software Association (ESA) for opposing legislation introduced by two New York state politicians that would give actors rights to their digital personas.
The union warned that without such rights, actors’ bodies, voices, and personalities can be lifted from their screen work and manipulated into footage they do not approve of and don’t get any compensation for, including deepfake pornography. While actors have a heightened risk of being manipulated into deepfake footage, the union emphasized any person with a social media account could also have their likeness manipulated.

Alex Ates, “Is That Actor Real? The Lowdown on Deepfakes” at Backstage Magazine (June 17, 2019)
But the problem is complex:
Under existing defamation law, for example, tabloids regularly get away with running seemingly fake or heavily exaggerated stories about celebrities because it is difficult to prove that the publishers made false statements with “actual malice.” Furthermore, to be actionable, the defamation must also result in actual injury. Thus, in the absence of provable malice and harmful effects, defamation law may not protect victims of deepfake videos, especially if he or she is a public figure. Additionally, because truth is an absolute defense to defamation, deepfakes raise some novel questions in defamation law. What if a deepfake shows President Donald Trump saying something that is literally true — or even something he actually wrote on Twitter — but the video is fake? Would a defamation claim be barred because the presentation is “substantially true”? Or could the president argue that despite the truth of the underlying statement, it falsely portrays the president as having spoken the statement on video? Of course, because it would be so difficult to prove any actual damages in some of these examples, it is unlikely that we will see many test cases by celebrities or leading politicians make their way through the courts.

David Singer, Camila Connolly, “How Hollywood Can (and Can’t) Fight Back Against Deepfake Videos (Guest Column)” at Hollywood Reporter
Indeed. Anyone who has glanced at the tabloids at a supermarket checkout counter will be aware of claims that the British royal family engineered the murder of Princess Diana or that Hillary Clinton adopted a space alien baby.
Stopping nonsense is impossible. But some people are trying to develop solutions that are fair to creators in more serious matters:
Congress’s actions are part of a broad trend across the country to regulate deepfakes, which many consider a menace to politics, privacy, business, and society’s shared conception of truth. Five states have already outlawed some deepfakes and about ten others are considering doing the same. Just last month, New York adopted a path-breaking law that establishes a postmortem property right for actors’ “digital replicas” and bars certain nonconsensual deepfakes.

Matthew F. Ferraro, “Congress’s deepening interest in deepfakes” at The Hill (December 29, 2020)
Here’s an example of the complexity: The book trilogy, The Lord of the Rings, is protected by long-established royalty rights.
But consider: Gandalf, in the film version of The Lord of the Rings trilogy, was not a real person. The actor who played him, however, is. If AI could simply poach his talents, what rights would he have, and what would the remedy be?
We will be hearing a lot more about this problem in the future and will need solutions that respect actual creators’ rights.
You may also enjoy: Are deep fakes too deep for us? Or can we fight back?