life in pixels

Can You Spot a Deepfake? Does It Matter?

Using a clip from a recent appearance on Conan, a YouTuber “deepfaked” Arnold Schwarzenegger’s head onto comedian Bill Hader’s body. Photo: YouTube

A shadow looms over the 2020 election: Deepfakes! The newish video-editing technology (or, really, host of technologies) used to seamlessly paste one person’s face onto another’s body has activated a panic among pundits and politicians. During an appearance on CBS This Morning this week, Instagram head Adam Mosseri summed up the general attitude toward deepfakes, which his platform currently has no policy against: “I don’t feel good about it.” Earlier this month, deepfaked and manipulated videos of Mosseri’s boss Mark Zuckerberg and of Nancy Pelosi were each the subject of breathless mainstream media coverage; last week, Congress held hearings on deepfakes. The media, a Politico headline claims, is “gearing up for an onslaught of fake video.” An onslaught! I don’t feel good about it!

Into this fray steps the Washington Post’s “Fact Checker” columnist, Glenn Kessler, who has published a “guide to manipulated video” with Nadine Ajaka and Elyse Samuels. The result is a beautifully designed taxonomy of what I think of as the deepfakes extended cinematic universe. The writers divide “manipulated video” into three categories — “missing context,” “deceptive editing,” and “malicious transformation” — and then subdivide each of those categories into two subcategories, creating in the process a spectrum of video misinformation that runs from “misrepresentation” (unedited but misleadingly presented videos) to outright “fabrication” (deepfakes, baby). “This guide,” they write, “is intended to help all of us navigate this new information landscape and start a necessary conversation.”

What struck me most, though, seeing all the possibilities of misleading video presented side by side, is that “deepfakes” don’t seem particularly threatening. Of the three examples of actual prominent deepfakes provided, two are basically anti-deepfake PSAs — videos created with the express purpose of educating people about the misinformation potential of deepfakes. In other words, the best examples of widespread deepfaked videos are ones in which Mark Zuckerberg and Barack Obama were deepfaked to warn people not to fall for deepfaked videos. That seems, well, like a good thing. (The third example is a video created with the express purpose of putting Nic Cage’s face on Donald Trump’s body, which is misinformation of a kind, I suppose, if you’d never seen Donald Trump or Nicolas Cage before.)

In fact, much more frightening than the example deepfakes in the guide — more frightening than any of the examples that used computers to edit or manipulate footage — were the clips on the opposite end of the spectrum: “unaltered video” presented “in an inaccurate manner” so as to “misrepresent the footage and mislead the viewer.” What makes these unedited and unmanipulated videos “frightening” to me is that they’re being shared by prominent political figures under incredibly dishonest premises. Who needs deepfakes when you have a congressman like Matt Gaetz willing to share video of a crowd in Guatemala and suggest that it shows Hondurans being paid by George Soros to migrate to the U.S.?

Put another way, by placing all of these misleading or manipulated videos in a row, the Post helps demonstrate that the threat of misinformation in videos, such as it exists, isn’t a function of new technology, but of social context. Most people determine the authority or veracity of a given video clip not because it’s particularly convincing on a visual level — we’ve all seen mind-bogglingly good special effects — but because it’s been lent credibility by other, trusted people and institutions. Who shared the video? What claims did they make about it? Deepfakes have a viscerally uncanny quality that makes them good fodder for panic and fearmongering. But you don’t need deepfake tech to mislead people with video.

Beyond this lies a deeper question: to what extent are people actually being “misled” by videos like the examples in the guide? That the video of Nancy Pelosi, manipulated to make her appear drunk, was widely shared on the right-wing internet doesn’t necessarily mean that it was widely believed to be true, in some empirical sense. I tend to agree with the technology writer Rob Horning, who argues that many manipulated and misrepresented videos are enjoyed and shared “less for factual information than emotional gratification.” There may be sophisticated actors who create manipulated videos for specific and highly targeted goals, but your average right-wing video edit exists “not to try to trick people but to entertain them with their very fakeness,” to help people pierce through what they believe to be an overly deferential consensus “reality” to expose some kind of deeper truth — in the case of the Pelosi video, say, the “truth” being that the Speaker of the House is a fraud, or incompetent, or should be removed from office.

But that may be delving too deeply into psychological terrain. We don’t have to psychoanalyze people who share faked videos to see their most obvious effect on politics.

Early in the morning of June 11, a number of Malaysian journalists and politicians were anonymously invited into two WhatsApp groups, where a video of two men having sex had been shared. One of the two men in the clip, accompanying documents implied, was Malaysia’s economic affairs minister, Mohamed Azmin Ali. The WhatsApp video was fairly low quality, but it was accompanied by a “confession,” posted to Facebook a few hours later by a 27-year-old cabinet aide named Muhammad Haziq Abdul Aziz, who identified Azmin and claimed to be the other man in the clip. What more proof would anyone need? Malaysia is a relatively socially conservative, democratic country with a high rate of smartphone penetration, and the clips quickly went viral across WhatsApp.

On the other hand … can you trust everything you see? Almost immediately, just as police launched an investigation and rivals called for Azmin to resign, his supporters began loudly crying that the minister had been victimized by deepfakes. Haziq, one Azmin ally insisted, is too out of shape to be the fit man seen in the video: “He has not been working out at the gym in a while, and his body isn’t as built as in the video.” The investigation continues, and there is still pressure on Azmin, but the possibility that either or both of the videos were deepfaked seems to have saved the minister’s job. “Nowadays you can produce all kinds of pictures if you are clever enough,” Azmin’s boss, Prime Minister Mahathir Mohamad, said. “One day you may also see my picture like that. It would be very funny.”

Were either of the videos “deepfakes,” or even just regular old staged fakes? Probably not — but the difficulty of clearly ascertaining their veracity, one way or the other, is the point. Deepfakes aren’t a cause of misinformation so much as a symptom — a technology that’s only really relevant to us because we already live in a world that’s having trouble settling on a consensus account of reality, and whose greatest use isn’t creating fakes but undermining our ability to ascertain what’s true. If you want a vision of the future, don’t imagine an onslaught of fake video. Imagine an onslaught of commenters calling every video fake. Imagine a politician saying “he has not been working out at the gym in a while, and his body isn’t as built as in the video,” forever.
