Deepfake technology makes the impossible possible, at least visually. In this case, we’re talking about a Richard Nixon speech that never actually occurred: a speech in which he announces that the Apollo 11 astronauts have died on the surface of the moon.
Although the video looks real, the speech was never delivered. The MIT Media Lab created the deepfake to illustrate just how dangerous AI-edited content can be when shared online without the context that the footage is fake.
“Computer-based misinformation is a global challenge,” Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality, said in a press statement. “We are galvanized to make a broad impact on the literacy of the public, and we are committed to using AI not for misinformation, but for truth.”
The Nixon deepfake is part of an art exhibition installed on MIT’s campus in Cambridge, Massachusetts. Called “In Event of Moon Disaster,” it recreates a 1960s-era living room, complete with a vintage television set flanked by three screens. The screens show edited images of Nixon drawn from speeches he actually gave. On the center screen, he reads a contingency speech written for him by his speechwriter, William Safire, “in event of moon disaster.” In the video, Nixon delivers the speech from his desk in the Oval Office.
To create the deepfake, the MIT team used deep learning, a type of artificial intelligence, to edit the video footage, and employed a voice actor to recreate Nixon’s voice. Working with Canny AI, an Israeli startup, the researchers used video dialogue replacement techniques to replicate the movement of Nixon’s lips as he speaks, matching his mouth to the fake speech. The final product is a strikingly believable video of Nixon telling the U.S. public that the moon landing mission had failed.
In 2014, Ian Goodfellow, then a Ph.D. student and now a researcher at Apple, invented the generative adversarial network, or GAN, the machine learning architecture that underpins deepfakes.
GANs move algorithms beyond the simple task of classifying data and into the arena of creating it, in this case images. A GAN pits two neural networks against each other: a generator that fabricates images and a discriminator that tries to tell the fakes apart from real examples. As the two compete, the generator’s output becomes increasingly convincing. Given as little as a single image, a well-trained GAN can produce a video clip of, say, Richard Nixon.
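The adversarial setup can be sketched in a few lines of code. The following is a hypothetical toy example, not MIT's actual pipeline: a one-parameter generator learns to mimic a 1-D Gaussian "dataset" by trying to fool a logistic-regression discriminator. Real GANs for faces use deep convolutional networks, but the training dynamic is the same.

```python
import numpy as np

# Toy GAN sketch (illustrative assumption, not the MIT/Canny AI system).
# "Real" data: samples from a Gaussian. The generator learns to mimic it
# by fooling a discriminator that tries to separate real from fake.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, c = 1.0, 0.0   # generator G(z) = a*z + c
w, b = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + b)

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)   # samples from the true distribution
    z = rng.normal(0.0, 1.0, batch)      # random noise input
    fake = a * z + c                     # generator's forgeries

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator update: push D(fake) toward 1 (i.e., fool the discriminator).
    d_fake = sigmoid(w * fake + b)
    g_grad = -(1 - d_fake) * w           # gradient of -log D(fake) w.r.t. fake
    a -= lr * np.mean(g_grad * z)
    c -= lr * np.mean(g_grad)

# Draw from the trained generator; ideally its samples now resemble the real data.
samples = a * rng.normal(0.0, 1.0, 1000) + c
print(f"generator output: mean={samples.mean():.2f}, std={samples.std():.2f}")
```

The key design point is the alternation: each player's loss depends on the other's current parameters, so neither network is trained against a fixed target. This same tug-of-war, scaled up to deep networks and image data, is what lets a GAN synthesize convincing faces.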
“Our goal was to use the most advanced artificial intelligence techniques available today to create the most believable result possible—and then point to it and say, ‘This is fake; here’s how we did it; and here’s why we did it,'” said co-director Halsey Burgund, a fellow in the MIT Open Documentary Lab.
A web-based version of the deepfake, which premiered in Amsterdam last month, will go live in the spring.