
Building a Lie Detector for Images

Fake images and videos are giving AI a black eye — but how can the machine learning community fight back?

The Internet is full of fun fake images, from flying sharks and cows on cars to a dizzying variety of celebrity mashups. Hyperrealistic image and video fakes generated by convolutional neural networks (CNNs), however, are no laughing matter; in fact, they can be downright dangerous. Deepfake porn reared its ugly head in 2018, fake political speeches by world leaders have cast doubt on news sources, and during the recent Australian bushfires manipulated images misled people regarding the location and size of the fires.


A new paper from UC Berkeley and Adobe researchers declares war on fake images. Leveraging a custom dataset and a fresh evaluation metric, the research team introduces a general image forensics approach that achieves high average precision in the detection of CNN-generated imagery.

Spotting such generated images may seem a relatively simple task: just train a classifier on fake images versus real images. In fact, the challenge is far more complicated, for a number of reasons. Fake images are likely to be generated from different datasets, each of which carries its own biases. Fake features are harder to detect when the dataset the detector was trained on differs from the dataset behind the generator that produced the fake. Network architectures and loss functions can also evolve quickly, leaving a fake image detection model behind. Finally, images may be pre-processed or post-processed, which makes it harder to identify features common across a set of fake images.
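To make the naive approach concrete, a minimal real-versus-fake classifier could look like the sketch below. This is illustrative code rather than the paper's released implementation: the folder layout, hyperparameters, and the use of torchvision's pretrained ResNet-50 are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical layout: data/train/real/*.png and data/train/fake/*.png.
# ImageFolder assigns labels alphabetically, so fake -> 0 and real -> 1.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# ResNet-50 with a single-logit head for binary real-vs-fake output.
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

model.train()
for images, labels in loader:
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    opt.zero_grad()
    loss.backward()
    opt.step()
```

As the paragraph above suggests, a classifier trained this way tends to latch onto the quirks of whichever generator produced its training fakes, which is exactly the generalization gap the paper targets.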

To address these and other issues, the researchers built a dataset from CNN-based generation models spanning a variety of architectures, datasets and loss functions. Real images were pre-processed, and an equal number of fake images was generated from each model, from GANs to deepfakes. Thanks to this high variety, the resulting dataset minimizes biases from any single training dataset or model architecture.
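At a high level, assembling such a dataset might be organized as in the sketch below; the generator names, file paths, and per-model counts are illustrative placeholders rather than the paper's actual roster.

```python
import random

# Illustrative roster of CNN-based generators; the real dataset spans
# many architectures, training sets, and loss functions.
generators = ["progan", "stylegan", "cyclegan", "gaugan", "deepfake"]
fakes_per_model = 1000  # equal fake counts per model to limit bias

samples = []
for gen in generators:
    for i in range(fakes_per_model):
        samples.append((f"fakes/{gen}/{i:05d}.png", 1))   # 1 = fake

# Pair with an equal number of real images, pre-processed the same way
# as the generators' own training data.
for i in range(len(generators) * fakes_per_model):
    samples.append((f"reals/{i:05d}.png", 0))             # 0 = real

random.shuffle(samples)
```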


The fake image detection model was trained on the new dataset using fakes from a single source: ProGAN, an unconditional GAN that generates random images with a simple CNN-based structure. Evaluated against a variety of other CNN image generation methods, the model's average precision was significantly higher than that of the baselines.
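Average precision, the metric reported here, summarizes detector quality without fixing a decision threshold. The toy scikit-learn snippet below shows the mechanics on synthetic labels and scores; none of the numbers come from the paper.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Toy stand-in: binary labels (1 = fake) and detector scores for one
# held-out generator. The study reports AP separately per generator.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
scores = labels + rng.normal(scale=0.8, size=500)  # noisy but informative

print(f"average precision: {average_precision_score(labels, scores):.3f}")
```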

Screen Shot 2020-01-14 at 14.25.37.png

Data augmentation is another approach the researchers used to improve detection of fake images that had been post-processed after generation. The training images, fake and real alike, underwent several additional augmentation variants, from Gaussian blur to JPEG compression. The researchers found that including data augmentation in training significantly increased model robustness, especially when dealing with post-processed images.
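A simple version of this augmentation can be written with PIL, as in the hypothetical sketch below; the blur radii, JPEG quality range, and the 50 percent application probabilities are assumptions for illustration, not the paper's exact settings.

```python
import io
import random
from PIL import Image, ImageFilter

def augment(img: Image.Image) -> Image.Image:
    """Randomly blur and/or re-compress an image so the detector also
    sees post-processed variants of both real and fake training data."""
    if random.random() < 0.5:  # assumed 50% blur probability
        img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 3)))
    if random.random() < 0.5:  # assumed 50% JPEG probability
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG",
                                quality=random.randint(30, 95))
        buf.seek(0)
        img = Image.open(buf).copy()
    return img
```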

Researchers find the “fingerprint” of CNN-generated images.

The researchers note, however, that even the best detector faces a trade-off between true-detection and false-positive rates, and it is very likely a malicious user could simply handpick a fake image that passes the detection threshold. Another concern is that post-processing effects added to fake images may increase detection difficulty, since the fake image fingerprints can be distorted in the process. There are also many fake images that were never generated by a CNN but rather photoshopped, and the detector won't work on images produced through such shallow methods.
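That trade-off can be read off an ROC curve: every candidate threshold fixes one true-positive rate and one false-positive rate, and driving false positives down necessarily lets more fakes through. The toy scikit-learn sketch below, using synthetic detector scores, illustrates the point.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)              # 1 = fake
scores = labels + rng.normal(scale=0.9, size=1000)  # toy detector scores

# Each threshold yields one (false-positive rate, true-positive rate) pair.
fpr, tpr, thresholds = roc_curve(labels, scores)
for f, t, th in list(zip(fpr, tpr, thresholds))[::100]:
    print(f"threshold {th:.2f}: TPR {t:.2f}, FPR {f:.2f}")
```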

The new study does a fine job of identifying the fingerprints of images produced by various CNN-based image synthesis methods. The researchers caution, however, that this is only one battle: the war on fake images has just begun.

The paper CNN-Generated Images Are Surprisingly Easy to Spot…For Now is on arXiv.


Author: Linyang Yu | Editor: Michael Sarazen

