
These A.I. tools could lead to the next generation of fake news

At the very least, such fake news could further divide us. At worst, it could incite violence, perhaps even on a large scale.
Credit: Getty Images.

There’s a reason we have the phrase “seeing is believing.” The rash of “fake news” that infested the 2016 presidential election consisted mostly of fabricated articles, and fabricated news stories have existed since the birth of the printing press. But what if, with the help of A.I., spin doctors and hackers could make fictitious videos so visceral and realistic that skepticism becomes much harder to maintain?


A.I. can now change a horse into a zebra, capture a person’s voice and use it to make them say whatever a programmer wants, and more. Soon, whole videos will be conjured out of thin air, as if the events they depict really happened. That time is coming, and little is being done to stop it. The next wave of misinformation, propaganda, and misdirection will have platforms with capabilities unheard of in human history. All of it will come through our social media echo chambers, as clickable and shareable as any reality-based content.

Left unchecked, this has the potential, at the very least, to divide and factionalize the U.S. and other countries even further, making for a more disharmonious world. At worst, incidents of violence could occur, even on a large scale. It’s happened in the past. Consider that inflammatory radio broadcasts helped ramp up the tensions behind some of history’s worst genocides, such as in Rwanda. And just last year, a man with a rifle marched into a D.C. pizzeria to uncover what he thought was a Clinton-backed child sex ring.

Humans are visual creatures. Over 90% of the data processed in the brain is visual, and a wide swath of the population learns best visually. As such, this onrush of A.I.-manipulated media has the potential to sway people to a degree never seen before.

Pretty soon, A.I. will create seamless visual experiences. We won’t be able to tell what’s real and what’s fabricated. Credit: Getty Images.

It’s already happening in porn. Wonder Woman star Gal Gadot’s face was recently pasted onto a porn actress’s body. Although it’s a flimsy job and easy to see through, the makers of “deepfakes,” as one Reddit user dubbed them, are getting more brazen, and their work more sophisticated, with the help of machine-learning algorithms and open-source code. How do they do it? The algorithms take existing content and reshape it into new material. Not only are more people pulling these shenanigans, the quality is improving all the time.
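To give a sense of the mechanics: the most widely reported face-swap recipe trains a single encoder on cropped faces of two people and gives each person their own decoder, so that encoding one face and decoding it with the other person’s decoder produces the swap. The PyTorch sketch below is a minimal illustration of that shared-encoder, two-decoder idea; the layer sizes, names, and training details are illustrative placeholders, not any particular project’s code.

```python
# Minimal sketch of the face-swap idea behind "deepfakes": one encoder is
# trained on cropped faces of both people, each identity gets its own decoder.
# Swapping = encode a face of person A, decode with person B's decoder.
# Architecture and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training (not shown) reconstructs A's faces through decoder_a and B's faces
# through decoder_b, so the shared encoder learns identity-agnostic features
# such as pose, lighting, and expression.
face_a = torch.rand(1, 3, 64, 64)             # stand-in for a cropped face of person A
swapped = decoder_b(encoder(face_a))          # render person B with A's pose and expression
print(swapped.shape)                          # torch.Size([1, 3, 64, 64])
```

The same trick works in either direction, which is why a single weekend of training on two photo collections can be enough to produce a crude swap.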

This year, researchers at UC Berkeley developed a method that pushes forward what’s known as image-to-image translation. In a video, they turned an ordinary horse into a zebra. Again, not a perfect execution, but a significant step forward. It won’t be long before the rough spots are smoothed out and fabrications appear authentic. So the video side is almost there, but what about audio? Lyrebird is a ground-breaking startup whose software can produce a believable speech in someone’s voice after sampling just one minute of it.
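The horse-to-zebra result is widely attributed to the Berkeley group’s CycleGAN work, whose key idea is cycle consistency: translate an image to the other domain and back again, and penalize any difference from the original, so no paired horse/zebra photos are ever needed. The sketch below illustrates only that loss term under those assumptions; the one-layer “generators” are trivial placeholders for the real convolutional networks, and the adversarial losses used in full training are omitted.

```python
# Minimal sketch of the cycle-consistency loss used in unpaired
# image-to-image translation (e.g. horses <-> zebras). Placeholder
# single-conv "generators" stand in for real deep networks.
import torch
import torch.nn as nn

g_horse2zebra = nn.Conv2d(3, 3, 3, padding=1)   # maps horse photos toward zebra style
f_zebra2horse = nn.Conv2d(3, 3, 3, padding=1)   # maps zebra photos toward horse style

horse = torch.rand(1, 3, 256, 256)              # stand-in for an unpaired horse photo
zebra = torch.rand(1, 3, 256, 256)              # stand-in for an unpaired zebra photo

l1 = nn.L1Loss()
# Cycle consistency: going to the other domain and back should recover the
# original image, which keeps the translation faithful without paired data.
cycle_loss = l1(f_zebra2horse(g_horse2zebra(horse)), horse) \
           + l1(g_horse2zebra(f_zebra2horse(zebra)), zebra)

# In full training this term is added to adversarial losses that push
# g_horse2zebra(horse) to look like real zebra photos, and vice versa.
print(cycle_loss.item())
```

The cycle term is what lets the stripes appear while the pose, background, and lighting of the original frame stay put, which is exactly why the resulting video looks so plausible.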

Along these same lines, Adobe has been working on a series of new A.I. technologies, known collectively as Sensei. One of them is a video editing tool called Project Cloak, which lets anything be edited into or out of a video. Don’t like the mailbox in your scene? Simply wipe it away. Need palm trees in the background? No problem.

Another tool, known as Project Puppetron, lets someone take a photo of a person, apply any of a number of stylized faces, and create an animated clip in the chosen style. These feats are possible because machine learning can now distinguish the parts of a face, and separate foreground from background, better than previous models could.

Adobe hopes its Sensei A.I. media toolkit will revolutionize how media is created. Credit: Adobe.

Like any technology, these tools have positive and negative aspects. Such cutting-edge audio, image, and video editing tools could allow amateur artists to bring their craft up to the next level, or help experts become masters, perhaps even spawning a kaleidoscope of subgenres that advance the arts in totally new and unexpected ways. As for voice fabrication, Lyrebird believes it could be used to restore the voices of those who have lost them to disease. But of course, there’s the downside: the ability to pump out a whole new generation of fake news.

Safeguards will have to be put in place to protect the public from dubious content. Facebook and other social media sites are just beginning to take steps in that direction. This could easily set up a new sort of arms race, in which fake-news purveyors find tricks to get past “trust indicators” while social media sites fight desperately to keep uncovering violators and their latest, nefarious methods.

Last October, Adobe gave a taste of what its new A.I. software can do. See for yourself here:

