Remember that viral video of Nancy Pelosi that President Donald Trump reposted on Twitter last summer? The altered video showed the Democratic Speaker of the House slurring her words in what appeared to be a drunken soliloquy. It was widely described as a deepfake: a video manipulated with artificial intelligence.
While it may be too late for the folks who saw and shared this particular political video, Facebook is trying to prevent the spread of this new-age form of misinformation for the rest of the 2020 presidential election cycle with a new internal policy on deepfakes.
In a blog post released Monday, Monika Bickert, vice president of global policy management for Facebook, wrote that the company is making new efforts to “remove misleading manipulated media.”
The social media giant’s new rules came just days before Wednesday morning’s marathon House Energy and Commerce Committee hearing on manipulated media. Bickert, author of the Facebook deepfake blog post, was among the witnesses the committee questioned. She repeatedly stressed that the company wants to ensure people have control over their own data and what they see on the platform, but answered some specific questions only indirectly.
It was apparent that the committee is looking toward legislation that would address deepfakes moving forward, given their potential to disrupt the political process, elections, journalism, and even individual freedom.
“Technology is outpacing technology and the people,” said Lisa Blunt Rochester, a U.S. Representative from Delaware, during the hearing.
In 2014, Ian Goodfellow, then a Ph.D. student and now at Apple, invented generative adversarial networks, or GANs, the technology that makes deepfakes possible.
GANs push algorithms beyond the simple task of classifying data into the arena of creating data, in this case, images. A GAN pits two neural networks against each other: a generator that fabricates images and a discriminator that tries to tell the fabrications from real examples, with each network improving by trying to beat the other. Starting from as little as one image, a tried-and-tested GAN can create a video clip of, say, Richard Nixon saying something that is patently false and that he never actually said. (Yes, this video already exists.)
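The adversarial loop described above can be sketched in miniature. The toy example below is illustrative only, not any production deepfake system: a one-parameter "generator" learns to mimic a target distribution of numbers (standing in for images) by fooling a logistic-regression "discriminator," with the gradients worked out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator must learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b, initially producing samples centered at 0.
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c), a logistic real-vs-fake classifier.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b
    real = real_batch(batch)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - s_r) * real + s_f * fake)
    grad_c = np.mean(-(1 - s_r) + s_f)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    s_f = sigmoid(w * fake + c)
    dfake = -(1 - s_f) * w  # d/dfake of -log d(fake)
    a -= lr * np.mean(dfake * z)
    b -= lr * np.mean(dfake)

print(f"generator output is now centered near {b:.2f} (target 4.0)")
```

By the end of training, the generator's offset has drifted from 0 toward the real data's mean of 4, without ever seeing a real sample directly; it learns only from the discriminator's feedback. Real deepfake models follow the same pattern with deep convolutional networks and images instead of scalars.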
“Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” researchers wrote in a May 2019 paper that describes how simple it is to create a deepfake of a floating head from even just one image. “We show that such an approach is able to learn highly realistic and personalized talking head models of new people and even portrait paintings.”
In the Nixon example, a team at MIT used deep learning, a type of artificial intelligence, to edit the video footage and employed a voice actor to build the voice of Nixon. Alongside Canny AI, an Israeli startup, the researchers studied video dialogue replacement strategies to replicate the movement of Nixon’s lips while speaking, helping to match up his mouth to the fake speech. The final product is a truly believable video of Nixon telling the U.S. public that the moon landing mission had failed.
Clearly, videos like this and the Pelosi deepfake are not only a threat to credible news, but also national security and, yes, even elections.
Facebook is taking a specific, two-pronged approach to flagging and removing deepfakes. For a video to be taken down, it must meet both of the following criteria, according to the blog post: it has been edited or synthesized in ways that aren’t apparent to an average person and would likely mislead someone into thinking a subject of the video said words they did not actually say, and it is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear authentic.
Satire and parody videos are still safe, though, as are videos that have been edited only to omit or change the order of words.
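The policy boils down to a conjunction with carve-outs, which a short sketch makes concrete. The field names below are hypothetical shorthand for the policy as described in the article, not Facebook's actual moderation code:

```python
from dataclasses import dataclass

@dataclass
class Video:
    ai_generated: bool        # product of AI / machine learning
    misleading_edit: bool     # would mislead an average viewer
    satire_or_parody: bool    # exempt category under the policy
    words_omitted_only: bool  # edits limited to omitting/reordering words

def should_remove(v: Video) -> bool:
    """Hypothetical distillation of the removal rule described above."""
    if v.satire_or_parody or v.words_omitted_only:
        return False  # explicit exemptions
    # Both prongs must hold: AI-made AND misleading.
    return v.ai_generated and v.misleading_edit

# The slowed-down Pelosi video: misleading, but not AI-generated,
# so it fails the conjunction and escapes the removal rule.
pelosi = Video(ai_generated=False, misleading_edit=True,
               satire_or_parody=False, words_omitted_only=False)
print(should_remove(pelosi))  # False
```

Framed this way, the loophole discussed below is easy to see: any "cheapfake" made with ordinary editing software fails the AI prong and therefore survives, no matter how misleading it is.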
Notably, it doesn’t appear that Facebook’s new ban would cover the Pelosi video at all. After the video went viral last summer, it was widely viewed on Facebook, and according to The Verge, Facebook said at the time that the video did not violate its policies. Under the new rules, the Pelosi video could still slip through the cracks and remain on the site, because it wasn’t created with artificial intelligence at all: it was most likely edited with readily available software that slowed Pelosi’s speech down to a drunken slur.
Still, Facebook says videos that don’t meet its deepfake removal standards can be reviewed by one of its independent third-party fact-checkers, a network of 50 global partners operating in over 40 languages.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad,” the blog post says. “And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
Facebook argues that simply removing every manipulated video wouldn’t solve the problem, because the videos would still be available “elsewhere on the internet or social media ecosystem,” stripped of any context. Instead, the company contends, it’s better to label such deepfakes as false. That is likely what would happen to the Pelosi video if it surfaced today.
Dr. Joan Donovan, research director of the Technology and Social Change Project at Harvard’s Shorenstein Center on Media, Politics, and Public Policy, said during the House committee hearing that she believes the use of decentralized technology, like blockchain, could be helpful in identifying users who have created the original deepfake videos that end up appearing on social media.
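One way (among many) such decentralized provenance tracking could work is an append-only, hash-chained log that ties a media file's fingerprint to its original uploader. The sketch below is purely illustrative, all names are invented, and it omits the distributed-consensus machinery a real blockchain would add:

```python
import hashlib
import json

class ProvenanceLog:
    """Append-only log: each entry's hash covers the previous entry,
    so past records can't be silently rewritten."""

    def __init__(self):
        self.entries = []

    def register(self, media_bytes: bytes, creator: str) -> str:
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {"media_hash": media_hash, "creator": creator, "prev": prev}
        entry_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append({**record, "entry_hash": entry_hash})
        return media_hash

    def lookup(self, media_bytes: bytes):
        """Return the registered creator of this exact file, if any."""
        media_hash = hashlib.sha256(media_bytes).hexdigest()
        for entry in self.entries:
            if entry["media_hash"] == media_hash:
                return entry["creator"]
        return None

log = ProvenanceLog()
log.register(b"...video bytes...", "studio_a")
print(log.lookup(b"...video bytes..."))  # studio_a
```

The obvious limitation, and a reason this remains a suggestion rather than a deployed fix, is that hashing only identifies byte-identical copies; a re-encoded or slightly edited deepfake would need perceptual fingerprinting to trace.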
Bickert, meanwhile, said that Facebook already does “verify the identity of those advertisers” that post ads to the social media site, seeming to imply that there is no need for further intervention. Facebook also keeps a full library of those ads and who is responsible for them, Bickert added.
Congressman Jerry McNerney, who represents California’s 9th district, asked Bickert if Facebook’s fact-checking process happens quickly enough to prevent viral videos from spreading, to which she replied that user reporting of deepfakes allows the removal process to happen quickly. However, she skirted McNerney’s question when he asked if Facebook would be willing to submit to a third-party audit of its practices in spreading or preventing misinformation by June 1 of this year.
“We think transparency is important,” Bickert replied, adding that the company would be happy to follow up with the committee on specific concerns, but did not address the possibility of an audit.
Meanwhile, other members of the committee are working on draft legislation that could force Facebook to rethink the way it handles deepfakes and other forms of misinformation.
Representative Yvette Diane Clarke from New York’s 9th congressional district has proposed a deepfake bill that would require creators to label deepfake content as such before posting it anywhere online.
Donovan said this could help content platforms keep misinformation from going viral, a practice she calls “proactive” content moderation.