How scientists are fighting back against the coming deepfake sh*t-show

By Mark Sullivan

Deepfake audio and video distributed widely on social networks have the potential to cause untold chaos and violence. Imagine a deepfaked video of Joe Biden (or some other presidential candidate) announcing he’s conceding the race early on election night 2020, causing millions of would-be voters to leave polling places. Already, the pace of production seems to be accelerating, with that creepy Mark Zuckerberg video and the “drunk” Nancy Pelosi video going viral in recent weeks.

With bad actors rushing to master their deepfakery, computer science needs to fight back. Some of this work is happening inside tech companies, and some inside universities. Researchers at the USC Information Sciences Institute (USC ISI) have just announced new software that uses artificial intelligence to quickly detect deepfake video with 96% accuracy.

The software stacks the frames of a suspect video on top of one another, and a neural network looks for inconsistencies in the movement of different parts of the subject’s face from one frame to the next. In a legitimate (unmanipulated) video, the eyelids, for example, open and close in a smooth, natural-looking sequence, while the same action in a deepfake might look choppy and unnatural. The model also watches other parts of the face, such as the region around the mouth and the tip of the nose.
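To make that concrete, here’s a minimal sketch (in PyTorch) of a recurrent-convolutional detector in the spirit of the paper’s title: a small CNN pulls features from each frame, and a recurrent layer scans the stacked frames for temporal glitches. This is not the researchers’ code; every layer size and name is an illustrative assumption.

    import torch
    import torch.nn as nn

    class FrameFeatureExtractor(nn.Module):
        """Per-frame CNN; a real system would likely use a pretrained backbone."""
        def __init__(self, feature_dim=128):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, feature_dim)

        def forward(self, x):            # x: (batch, 3, H, W)
            return self.fc(self.conv(x).flatten(1))

    class RecurrentDeepfakeDetector(nn.Module):
        """CNN features per frame, then a GRU scans the frame sequence for
        temporal inconsistencies (e.g., unnatural eyelid or mouth motion)."""
        def __init__(self, feature_dim=128, hidden_dim=64):
            super().__init__()
            self.extractor = FrameFeatureExtractor(feature_dim)
            self.gru = nn.GRU(feature_dim, hidden_dim, batch_first=True)
            self.classifier = nn.Linear(hidden_dim, 1)

        def forward(self, frames):       # frames: (batch, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.extractor(frames.flatten(0, 1)).view(b, t, -1)
            _, h = self.gru(feats)       # h: (1, batch, hidden_dim)
            return torch.sigmoid(self.classifier(h[-1]))  # P(manipulated)

    # Toy usage: a batch of two 16-frame clips of 64x64 face crops.
    clips = torch.randn(2, 16, 3, 64, 64)
    print(RecurrentDeepfakeDetector()(clips))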

The neural network is trained on a common set of 1,000 manipulated videos, principal investigator Wael Abd-Almageed told me. The videos are products of three main manipulation methods: deepfakes, face swaps, and Face2Face (a program that modifies video footage to make the subject appear to mimic the facial expressions of another person in real time). And because creators of deepfakes often use video compression to evade detection, Abd-Almageed said, the USC researchers fed their model both compressed and uncompressed versions of the videos.
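The compression detail points to a simple augmentation step: pair each pristine clip with a re-compressed copy so the model can’t lean on artifacts that compression would wash out. Here is a hedged sketch of that idea, using OpenCV’s JPEG re-encoding as a stand-in for video compression (the quality setting is an arbitrary illustrative choice, not the researchers’ setting).

    import cv2
    import numpy as np

    def recompress_frame(frame, quality=30):
        """Simulate lossy compression by JPEG re-encoding a single frame."""
        ok, buf = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
        assert ok, "JPEG encoding failed"
        return cv2.imdecode(buf, cv2.IMREAD_COLOR)

    def augment_clip(frames):
        """Return the pristine clip plus a compressed variant for training."""
        return [frames, [recompress_frame(f) for f in frames]]

    # Toy usage: one random 64x64 BGR "frame".
    frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    pristine, compressed = augment_clip([frame])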

Older detection methods painstakingly look for clues by examining a video frame by frame, which requires a lot of time and compute power. The USC researchers’ method looks at the entire video at once, and it’s compact and efficient enough to watch for deepfakes in real time across millions of accounts on a social network.
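The efficiency difference comes down to how the work is batched: scoring frame by frame means one forward pass per frame plus an aggregation step, while a sequence model scores the stacked clip in a single pass. A rough illustration (the stand-in models here are assumptions, not any specific system):

    import torch

    def score_frame_by_frame(frame_model, frames):   # frames: (T, 3, H, W)
        # Older style: one forward pass per frame, then aggregate the scores.
        return torch.stack([frame_model(f.unsqueeze(0)) for f in frames]).mean()

    def score_whole_clip(clip_model, frames):        # frames: (T, 3, H, W)
        # Sequence style: a single pass over the whole stacked clip.
        return clip_model(frames.unsqueeze(0))

    # Toy demo with a dummy stand-in model.
    dummy = lambda x: x.mean()
    frames = torch.randn(16, 3, 64, 64)
    print(score_frame_by_frame(dummy, frames), score_whole_clip(dummy, frames))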

The USC researchers presented their work, “Recurrent Convolutional Strategies for Face Manipulation Detection in Videos,” at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) in Long Beach, California, this week. The research was funded by the Defense Advanced Research Projects Agency (DARPA) MediFor program.

The USC researchers believe their detection method is “far ahead” of the evasion techniques deepfake creators currently have at their disposal. But, they said, content manipulators quickly modify their fakery as new detection methods arise. As with every other type of cyberthreat, an arms race may develop between the white hats (security pros) and the black hats (manipulators).

“If you think deepfakes as they are now is a problem, think again,” Abd-Almageed said. “Deepfakes as they are now are just the tip of the iceberg, and manipulated video using artificial intelligence methods will become a major source of misinformation.”


Fast Company
