We know there’s a lot of false information to be found on the internet, but sometimes it’s legitimately difficult to tell whether something really is false. Enter the deepfake — computer-synthesized video or audio recordings that can look and sound nearly indistinguishable from the real thing.
Media experts have already sounded the alarm, afraid that these deepfakes may usher in a new era of disinformation and further chip away at our ability to trust online sources.
Others, however, think that deepfakes are actually good, and that they will expose the ways in which we’ve come to trust images and audio too much.
The Danger That Deepfakes Pose
In November, documentary filmmaker Halsey Burgund published a film clip on YouTube. The clip, which shows Richard Nixon announcing that the Apollo 11 mission had ended in disaster, was created from archival footage with the help of artificial intelligence.
The speech itself is real, drafted in 1969 in case the mission failed, but Nixon never delivered it. Both the audio and the video are complete fabrications, and they are very difficult to tell apart from genuine footage.
It’s one of the most recent examples of a deepfake, a new application of AI that has media experts worried.
Their fear is that deepfakes will further erode Americans’ trust in the news at a time when the majority of Americans already mistrust mass media, and phrases like “fake news” are common refrains on social media. Deepfakes could be used to stir unrest, influence elections or harm the reputation of a major corporation.
For example, one deepfake from artist Bill Posters featured Facebook founder Mark Zuckerberg claiming to own the lives of the platform’s users. The deepfake was designed to challenge Facebook’s moderation of deepfake videos, which some had considered weak or ineffective.
The number of deepfakes on the internet is already in the five figures, and the pace at which they’re being created is increasing. Between December 2018 and September 2019, the total estimated number of deepfakes on the web increased by 100 percent. It’s likely that as deepfakes become more common over the next few years, they’ll become an even bigger threat.
These deepfakes can sometimes be fact-checked, but deepfake detection techniques continue to lag behind the most advanced deepfake creation methods. Some experts also fear that if Americans don’t trust the news, they won’t trust fact-checkers, either.
Deepfakes could also be just the tip of the iceberg. Artificial intelligence is a powerful tool, and it could also be used to recreate writing styles or signatures, producing ever more specific and believable fakes.
The conspiracies spread by these deepfakes could make national conversations difficult or impossible as the public finds it harder to agree on even the basic facts of major topics.
The Silver Lining of Deepfakes
Other media experts see a silver lining to the issues that deepfakes pose. They argue that deepfakes may, in fact, be good for media literacy. If the public is aware of the danger presented by deepfakes, they’ll no longer look at photos and video, which can already be manipulated, and assume that these represent the truth.
In a sense, there have always been “deepfakes”: photo-manipulation techniques, both digital and analog, and clever splicing of audio clips can produce fakes that look or sound, to an amateur, just like the real thing.
Even recently, “shallow” fakes created with older technology have been just as effective as modern deepfakes at spreading conspiracy theories. Rather than creating a new problem, the argument goes, deepfakes are exposing an existing one: our habit of treating audio, video and photos as automatically truthful.
It’s a bit of a cynical argument, but it’s one that has gained some traction — and it may be seen more often if experts find themselves unable to counter the spread of deepfakes.
What Deepfakes Will Mean for Media
If current trends hold, deepfakes will only become more common in the future. With detection techniques only partially able to catch them, and with Americans’ mistrust of the media and fact-checkers, it’s unlikely that better technology alone will defeat deepfakes.
Some argue that deepfakes are dangerous because they will do irreversible damage to Americans’ relationship with mass media and further reduce their trust in the news. Others argue that deepfakes are good because they will discourage us from treating photos and video as if they always represent the truth.
It’s not yet clear whether the long-term impacts of deepfakes will be positive or negative. However, these fakes are almost certainly going to lead to a greater distrust of media in the short term.
Top image: BuzzfeedVideo on YouTube