How to identify and respond to “deepfake” videos (realistic AI-synthesized videos created to spread misinformation) is a challenge highlighted by recent social media stumbles, particularly from Facebook. Several months ago Facebook was criticized for failing to remove a viral video manipulated to make US House Speaker Nancy Pelosi sound drunk.
In collaboration with the Partnership on AI, Microsoft, and academics from top universities, Facebook today announced the Deepfake Detection Challenge (DFDC), with the aim of finding innovative deepfake detection solutions to help the media industry spot videos that have been morphed by AI models.
The challenge includes a dataset of video pairs (originals filmed by paid actors and tampered versions generated by various AI techniques). Facebook says no actual Facebook user data will be used, and has pledged US$10 million to encourage global participation in the challenge.
Facebook will use next month’s International Conference on Computer Vision (ICCV) as part of the project’s development process, and plans to release the dataset and launch the challenge at the Conference on Neural Information Processing Systems (NeurIPS) — the world’s biggest AI research conference — this December in Vancouver. The submission deadline is March 2020.
Deepfake technology came to the public’s attention in 2017, when a Reddit user employed face-swapping techniques enabled by generative adversarial networks (GANs) to create a series of fake celebrity pornographic videos. The term has since expanded to cover any video or audio manipulated by machine learning models with the aim of misrepresenting information.
The creation of easy-to-use voice cloning and face-swapping AI tools has increased deepfake production and dissemination and sparked widespread concerns over information authenticity. This March, cybercriminals used AI-powered software to imitate a CEO’s voice and scam US$243,000 from a UK-based energy company.
“Deepfakes may disperse rapidly in social networks with spreading dynamics similar to disease,” warn IEEE Senior Member Sakshi Agarwal and Lav R. Varshney in their paper Limits of Deepfake Detection: A Robust Estimation Viewpoint.
Current automatic deepfake detection research includes a deep learning based method from the University at Albany-SUNY that distinguishes AI-generated videos from real ones without using deepfake-generated images as reference. A paper from UC Berkeley and USC meanwhile introduces a forensic technique that models the facial expressions and movements typifying an individual’s speaking pattern to detect deepfakes. Their paper Protecting World Leaders Against Deep Fakes was featured in a CVPR 2019 workshop.
While deepfake videos are still limited in number, global law enforcement and governments are already taking action to punish those who synthesize videos for malevolent purposes. In June a US Congresswoman introduced a bill to criminalize such synthetic media, TechCrunch reported. In July, the state of Virginia added realistic fake videos and photos to its nonconsensual pornography ban.
More information and future updates can be found on the Deepfake Detection Challenge website.
Journalist: Tony Peng | Editor: Michael Sarazen