Deep Fakes: Fake Video and Audio Indistinguishable From Reality Are Here, and Here to Stay

Fake news is about to get a whole lot fakier.


Have you ever heard of the website called This Person Does Not Exist?

Until yesterday, I hadn’t either.

Each time you visit the site, you’re presented with a different human face. Here’s an example:

Source: https://thispersondoesnotexist.com

The catch? The image was created by artificial intelligence.

Can you tell it’s not a real person?
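Under the hood, sites like this are widely reported to rely on a generative adversarial network (GAN) such as Nvidia's StyleGAN. As a rough illustration of the core idea only (a toy stand-in, not the actual model or its weights), here's a sketch in Python: a generator network maps a random latent vector to an image, so every refresh of the page produces a brand-new face that belongs to no one.

```python
# Toy illustration of the GAN-generator idea behind sites like
# thispersondoesnotexist.com. This is NOT StyleGAN; it is a minimal,
# hypothetical stand-in to show the latent-vector-to-image mapping.
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    """A tiny stand-in for a real GAN generator such as StyleGAN."""

    def __init__(self, latent_dim: int = 512, img_size: int = 64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 3 * img_size * img_size),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        out = self.net(z)
        return out.view(-1, 3, self.img_size, self.img_size)


generator = ToyGenerator()
z = torch.randn(1, 512)      # each page refresh = a fresh random latent vector
fake_face = generator(z)     # an image of a "person" who does not exist
print(fake_face.shape)       # torch.Size([1, 3, 64, 64])
```

A real model like StyleGAN does the same thing at far higher fidelity, with convolutional layers and years of training on photographs of real faces.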

In the past, we’ve seen Hollywood try to do this with varying degrees of success. Most recently, Martin Scorsese’s The Irishman (2019) made headlines for de-aging its stars, Robert De Niro, Al Pacino, and Joe Pesci. Unfortunately, even with a budget of USD 159 million, they didn’t quite pull it off.

Fast forward a couple of months to December 2019 and someone had recreated the de-aging effects using free software in just 7 days. Incredibly, the effects look even better than the original. Don’t believe me? Watch the side-by-side comparison on YouTube.

Similar software was used again in August 2020 to produce this hilarious video of five U.S. presidents rapping N.W.A.’s Fuck the Police. According to the video description, it was created by an 18-year-old who had only been using the software for a week. And, to top it off, the video was purposely made to appear less real than it could have been.

Finally, there is this very disturbing GIF in which actress Amy Adams (left) is modified to have the face of actor Nicolas Cage (right). Amy, you deserve better!

Anyway, this is pretty cool, right? The possibilities for using this technology in movies, video games, and other creative endeavors are endless!

But, like any technology, there’s a dark side. What does the world look like when fake videos can be created of anyone saying or doing anything?

The impact of one fake video

Imagine you’ve just sat down to watch the evening news when, suddenly, the newscaster is interrupted by a breaking story. What you see next is a video of a candidate in your city’s mayoral election making overtly racist remarks. The video and audio are perfectly clear. You can’t believe what you just heard.

You watch as an industry expert suggests that the video could be a fake.

The newscaster is skeptical: “You can’t deny how real this looks.”

Now, what do you do? You might be aware that software exists to manipulate videos in this manner, but how does that help you decide whether this video is real or not? How do you make a decision?

As time goes on, more and more videos will be released into the world just like this one. And they’ll just keep getting better. If you can tell it’s not real today, just wait — tomorrow will be different. And then what?

Can’t we simply detect fake videos?

You might be thinking: there must be a way to detect fake videos!

There are two things to say about this.

First, yes, we will build software to detect fake videos. But, this detection software is going to drive synthetic media software to become ever more sophisticated. This will become an arms race — who can build better software, faster? And, ultimately, this arms race could drive synthetic media to become indistinguishable from real media.
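To make the detection side concrete: at its core, a deepfake detector is just a binary classifier over video frames (or short clips). The sketch below is a deliberately simplified, hypothetical example, not any particular published detector; real systems use much larger networks plus temporal and audio cues. The adversarial twist is that the very same "real or fake" signal can be used to train better generators, which is exactly the arms-race dynamic described above.

```python
# Hedged sketch of a deepfake detector: a binary classifier that labels a
# single video frame as real or synthetic. Architecture and names here are
# illustrative assumptions only.
import torch
import torch.nn as nn


class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # Returns a logit: positive means "likely fake", negative means "likely real".
        return self.head(self.features(frame))


detector = FrameClassifier()
frame = torch.rand(1, 3, 224, 224)         # one video frame (dummy data)
p_fake = torch.sigmoid(detector(frame))    # probability the frame is synthetic
print(float(p_fake))
```

Note the symmetry: a generator trained until a classifier like this can no longer tell its output from real footage is, by construction, producing footage the detector cannot flag.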

The second problem is much more immediate. In some ways, it doesn’t matter whether we can detect fake videos. If you watch a video and it changes your mind, the damage is done. We know from psychology that once we’ve been convinced of something it’s extremely difficult to change our minds, even in the face of overwhelming evidence. So, if we watch a video of a political candidate making racist remarks, we’re probably going to continue to think of that candidate as a racist even if we later learn the video was fake.

Besides, as of May 2019, 500 hours of content were being uploaded to YouTube per minute. Even if there were software to detect fake videos, how could it ever keep up? The practical answer is, it couldn’t.
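A quick back-of-envelope calculation shows the scale. The upload rate is the May 2019 figure cited above; the frame rate is an assumption for illustration.

```python
# Back-of-envelope arithmetic on the screening problem.
hours_per_minute = 500                       # hours of video uploaded to YouTube per minute (May 2019)
hours_per_day = hours_per_minute * 60 * 24   # 720,000 hours of new video every day
frames_per_hour = 30 * 3600                  # assuming 30 frames per second
frames_per_day = hours_per_day * frames_per_hour

print(f"{hours_per_day:,} hours uploaded per day")
print(f"{frames_per_day:,} frames to screen per day")
```

That works out to roughly 78 billion frames a day under these assumptions, before you even count re-uploads and live streams. No review pipeline, human or automated, keeps pace with that.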

This means that in the coming months and years we are going to increasingly see videos that aren’t real. It’s only a matter of time before one of these videos causes a scandal of epic proportions.

What happens next?

If you’ve watched the documentary The Social Dilemma, you know that our relationship to information is pretty fucked up. And we largely have social media to thank for that.

For a fantastic summary of that problem, I’d highly recommend listening to Sam Harris’ discussion with Tristan Harris, an expert on the matter. Or, you could read the post I wrote on the subject.

Our relationship with social media is already tearing us apart. In the United States, we are seeing the fabric of society fraying at the edges. The world doesn’t need any more problems. And now we’ve got a big one.

It feels like synthetic media will be the straw that breaks the camel’s back, except it’s not a straw: it’s a tank, it’s loaded, and it’s pointing directly at our faces.

If some kid in his parents’ basement can create convincing videos of people saying and doing anything, how could we possibly discern real from fake? Are you going to rely on some guy you follow on YouTube to tell the difference? Are you going to rely on CNN or Fox News?

“There is a deep moral and ethical imperative to get the framework for the use of synthetic media correct,” says Nina Schick, author and broadcaster specializing in the impacts of synthetic media on society, “because if we don’t get this right, and five or six years down the line when synthetic media is ubiquitous and our politics have become even more partisan and polarised, it’s going to be too late.”