Defense Department project develops first tools to detect ‘deepfake videos’

A Defense Department project has developed some of the first tools able to automatically detect a particularly deceptive type of digitally manipulated content known as 'deepfake' videos.

Deepfake videos often feature one person’s face convincingly merged onto another’s body. Others show a person making movements and speaking words they never made or said in real life; combined with audio manipulation, the result can be the likeness of former President Barack Obama appearing to say things actually uttered by someone else in a studio.

The technology uses machine learning to capture the details of a person’s face. The model analyzes video footage of the target, and the more footage it has to study, the more convincing its output becomes. That’s why presidents and celebrities, for whom hours of footage exist, are frequently used in deepfake experiments.
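
Classic face-swap deepfakes are often built as an autoencoder with one shared encoder and a separate decoder per identity: the encoder learns facial structure common to both people, and swapping decoders at inference time transplants one face onto the other. As a rough illustration of that idea, and not the specific systems described in this article, here is a minimal PyTorch sketch in which random tensors stand in for aligned face crops; every layer size and hyperparameter is hypothetical.

```python
# Sketch of the shared-encoder / two-decoder autoencoder design commonly
# used for face-swap deepfakes. Random tensors stand in for aligned face
# crops of persons A and B; all shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop

encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, IMG)  # stand-in for footage of person A
faces_b = torch.rand(32, IMG)  # stand-in for footage of person B

for step in range(100):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss_a = loss_fn(decoder_a(encoder(faces_a)), faces_a)
    loss_b = loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss = loss_a + loss_b
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode person A's face, then decode it with person B's decoder,
# yielding B's face with A's expression and pose.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The punchline is the last line: because the encoder is shared, footage of person A can be decoded through person B’s decoder, which is why the amount of footage available for each face matters so much.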

Unsurprisingly, this technological evolution has alarmed many in media and government. The fear is that it could usher in a new era of fake news, one in which it would be virtually impossible to tell whether what you see on a screen is real or fake.

“This is an effort to try to get ahead of something,” said Florida Senator Marco Rubio in remarks at the Heritage Foundation. “The capability to do all of this is real. It exists now. The willingness exists now. All that is missing is the execution. And we are not ready for it, not as a people, not as a political branch, not as a media, not as a country.”

In 2014, the technology behind such fakes took a major leap forward with the introduction of an approach called generative adversarial networks (GANs). As Martin Giles writes for the MIT Technology Review, the approach is akin to an art forger and an art detective repeatedly trying to outwit one another, resulting in increasingly convincing fakes.

“Both networks are trained on the same data set. The first one, known as the generator, is charged with producing artificial outputs, such as photos or handwriting, that are as realistic as possible. The second, known as the discriminator, compares these with genuine images from the original data set and tries to determine which are real and which are fake. On the basis of those results, the generator adjusts its parameters for creating new images. And so it goes, until the discriminator can no longer tell what’s genuine and what’s bogus.”
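
The forger-and-detective loop Giles describes is compact enough to sketch directly. The following minimal, hypothetical PyTorch example trains a generator against a discriminator on toy one-dimensional data standing in for images; the architecture and hyperparameters are illustrative only, not drawn from any real deepfake system.

```python
# Minimal GAN sketch: a generator and a discriminator trained adversarially.
# Toy 1-D Gaussian samples stand in for the "genuine images" in the quote.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0         # "genuine" samples
    fake = generator(torch.randn(64, latent_dim))  # artificial outputs

    # Discriminator: learn to label real samples 1 and fake samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: adjust its parameters so its outputs get labeled 1 (real).
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each pass, the discriminator’s loss pushes it to separate real from fake, while the generator’s loss rewards fooling it; at equilibrium the discriminator “can no longer tell what’s genuine and what’s bogus,” exactly as the quoted passage puts it.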

Recently, a contest run by the U.S. Defense Advanced Research Projects Agency (DARPA) asked researchers to automate existing forensic tools in an effort to keep pace with deepfakes. The goal was to find areas where the fakes still fall short.

One forensics approach exploited a rather simple observation: faces created by deepfake technology hardly ever blink. The reason? The neural networks that create the fakes typically study still images of the target’s face, and people are rarely photographed with their eyes closed. It’s a useful tell for now, but deepfake creators could defeat it by feeding their networks more images of faces mid-blink.
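
A common way to operationalize that observation is the "eye aspect ratio" (EAR), a standard blink-detection measure from the facial-landmark literature rather than DARPA’s specific tooling. The sketch below assumes six landmark points per eye are supplied by some external detector (dlib’s 68-point predictor is a typical choice, though none is shown here), and the threshold values are illustrative.

```python
# Sketch of blink detection via the eye aspect ratio (EAR). Assumes six
# (x, y) landmark points per eye, ordered as in the common 68-point facial
# annotation scheme, produced by an external detector not shown here.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) with landmarks around one eye."""
    # Two vertical distances across the eyelid...
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # ...normalized by the horizontal eye width. Closed eyes push EAR
    # toward zero; open eyes sit noticeably higher.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count dips below the threshold lasting at least min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

Run over every frame of a clip, a genuine talking-head video should register blinks every few seconds, while a face whose EAR never dips below the threshold is a candidate fake.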

Other techniques, as Will Knight wrote for MIT Technology Review, take advantage of the peculiar signatures deepfake technology leaves behind, like unnatural head movements and odd eye color.

“We are working on exploiting these types of physiological signals that, for now at least, are difficult for deepfakes to mimic,” says Hany Farid, a leading digital forensics expert at Dartmouth College.

Still, it’s possible these kinds of forensic approaches will be forever one step behind the evolution of deepfake technology.

“Theoretically, if you gave a GAN all the techniques we know to detect it, it could pass all of those techniques,” David Gunning, the DARPA program manager in charge of the project, told MIT Technology Review. “We don’t know if there’s a limit. It’s unclear.”
