Spotting Generated Images Just Got Much Easier

UC Berkeley and Adobe Research have introduced a “universal” detector that can distinguish real images from generated images regardless of what architectures and/or datasets were used for training.

Today’s AI-driven image-generation techniques, based on GANs (generative adversarial networks) and other methods, can produce incredibly realistic images that are notoriously difficult to distinguish from real photographs. Such image generation technology can pose a serious threat to society, as fake images have the potential to affect how people view the world and can sway public opinion and even elections. In the wrong hands, image generation can become a tool of manipulation or harassment, and can interfere with AI security and safety systems.

Identifying AI-generated images, however, is technically demanding. Although detecting images generated by one specific technique can be relatively straightforward, such approaches lack generalization capability because image generation methods vary widely in their training datasets, network structures, loss functions and even image preprocessing. It remains a challenge for a classifier trained to detect one approach to be effective on images generated by other models.

To address this, the UC Berkeley and Adobe Research team designed their detector to generalize across generators rather than target any single one.

The researchers collected 11 different CNN-based image generator models — mostly GANs — which are by far the most common design for generative CNNs. They started by using ProGAN to train a universal classifier and tested it against the other CNN-based image generators.
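The cross-generator evaluation protocol above can be sketched as follows. This is an illustrative outline only, not the authors' code: `train_classifier` is a hypothetical stand-in for training a real CNN, and the generator list is abbreviated.

```python
import random

# Illustrative sketch of the cross-generator evaluation protocol:
# fit a real-vs-fake classifier on ProGAN outputs only, then score
# images from generators never seen during training.
# train_classifier() is a hypothetical stand-in, NOT the paper's code.

UNSEEN_GENERATORS = ["StyleGAN", "BigGAN", "CycleGAN", "StarGAN", "GauGAN"]

def train_classifier(real_images, progan_images):
    # Stand-in for training a CNN that outputs a "fakeness"
    # score in [0, 1] for an input image.
    rng = random.Random(0)
    return lambda image: rng.random()

def evaluate(classifier, generators, images_per_generator=100):
    # One score distribution per generator the classifier never saw;
    # in the paper these scores feed an average-precision computation.
    results = {}
    for name in generators:
        fake_scores = [classifier(f"{name}-sample")
                       for _ in range(images_per_generator)]
        results[name] = sum(fake_scores) / len(fake_scores)
    return results

classifier = train_classifier(real_images=[], progan_images=[])
scores = evaluate(classifier, UNSEEN_GENERATORS)
for name, score in scores.items():
    print(f"{name}: mean fakeness score = {score:.2f}")
```

The key point the sketch captures is that the classifier is trained on a single generator's output yet evaluated on every other generator, which is what makes the reported generalization notable.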

[Image: A classifier trained to detect images generated by ProGAN (far left) and other models]
[Image: Test results for forensic classifiers on a variety of CNN-based image generation methods]
[Image: Effect of augmentation methods]

The test results show that forensic models trained on CNN-generated images can generalize to other, unseen CNN models. The researchers also discovered that data augmentation is critical for generalization, as is the diversity of training images. Taking BigGAN as an example, augmentation significantly improves the average precision (AP) score from 72.2 to 88.2. Performance on other models (CycleGAN, GauGAN) is similarly improved, from 84.0 to 96.8 and from 67.0 to 98.1 respectively. In general, training with augmentation helps boost performance.
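The AP numbers quoted above summarize a detector's full precision-recall curve, so they don't depend on choosing a single score threshold. A minimal stdlib sketch of the computation follows; the labels and scores are made-up illustrations, not the paper's data.

```python
def average_precision(labels, scores):
    """AP: mean of the precision values measured at each rank where a
    true positive (label == 1) appears, scanning scores high to low."""
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    true_positives = 0
    precisions = []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            true_positives += 1
            precisions.append(true_positives / rank)
    return sum(precisions) / max(true_positives, 1)

# Toy example: 1 = generated ("fake"), 0 = real; scores are the
# detector's fakeness outputs. A perfect ranking yields AP = 1.0.
labels = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.2, 0.1]
print(average_precision(labels, scores))  # → 1.0
```

A jump from 72.2 to 88.2 AP, as reported for BigGAN, therefore means generated images are ranked above real ones far more consistently across all thresholds, not just at one operating point.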

[Image: Effect of dataset diversity]

The researchers also found that AP improves as the number of classes used in training increases from 2 to 16. But this holds only up to a point, as there is minimal improvement when increasing the number of classes from 16 to 20.

The paper CNN-Generated Images are Surprisingly Easy to Spot… For Now is on arXiv.


Author: Hecate He | Editor: Michael Sarazen
