Thanks to AutoML, and in particular automated neural architecture search (NAS), AI can now design better deep neural networks than human researchers for computer vision tasks such as image classification and object detection. AutoML's tremendous success has prompted AI researchers to explore its efficacy in additional areas, such as generative adversarial networks (GANs).
Researchers from Texas A&M University and MIT-IBM Watson AI Lab recently presented a paper that applies NAS to GANs. Their “AutoGAN” is an architecture search scheme specifically tailored for GANs that outperforms current state-of-the-art hand-crafted GANs on the task of unconditional image generation. The associated paper has been accepted by ICCV 2019.
Although NAS-designed architectures can outperform hand-crafted ones in image classification and segmentation, the researchers faced challenges in enabling NAS to automatically design GAN architectures: GAN training is notoriously unstable and prone to collapse even when human-designed architectures are used. Another challenge was finding an appropriate metric to evaluate and guide the search process.
The key design decisions the researchers made in their preliminary experiments are as follows:
- Defined the search space and guided the architecture search with a recurrent neural network (RNN) controller (a controller sketch follows this list);

- Used the Inception Score (IS) as the reward, since Fréchet Inception Distance (FID) produced comparable results but was more time-consuming to compute (an IS sketch follows this list);

- Applied multi-level architecture search (MLAS) to the model instead of single-level architecture search (SLAS) since MLAS outperforms SLAS and requires less training time.
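
The bullets above condense a fair amount of machinery, so here is a minimal, self-contained sketch of how an RNN controller can sample a sequence of discrete architecture choices and be updated with a REINFORCE-style policy gradient whose reward is the score of the sampled GAN. The search space, hyperparameters, and reward values below are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class ControllerSketch(nn.Module):
    """Toy RNN controller: at each step it emits a categorical distribution
    over the options for one architecture decision (e.g. conv type,
    upsampling op, skip connection) and samples a choice."""

    def __init__(self, num_choices_per_step, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(max(num_choices_per_step) + 1, hidden)
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n) for n in num_choices_per_step])

    def sample(self):
        h = c = torch.zeros(1, self.rnn.hidden_size)
        token = torch.zeros(1, dtype=torch.long)      # start token
        choices, log_probs = [], []
        for head in self.heads:
            h, c = self.rnn(self.embed(token), (h, c))
            dist = torch.distributions.Categorical(logits=head(h))
            action = dist.sample()
            choices.append(int(action))
            log_probs.append(dist.log_prob(action))
            token = action                            # feed choice back in
        return choices, torch.stack(log_probs).sum()

# One REINFORCE update: in AutoGAN-style search the reward would be the
# Inception Score of a GAN built from `arch` and briefly trained (omitted
# here); a moving-average baseline reduces gradient variance.
controller = ControllerSketch(num_choices_per_step=[3, 3, 2, 2])
opt = torch.optim.Adam(controller.parameters(), lr=3.5e-4)
arch, log_prob = controller.sample()
reward, baseline = 8.0, 7.5                           # placeholder values
loss = -(reward - baseline) * log_prob
opt.zero_grad()
loss.backward()
opt.step()
print("sampled architecture:", arch)
```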

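Since the Inception Score serves as the search reward, a short sketch of how it is computed may help: IS is exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) are the class probabilities a pretrained Inception network assigns to a generated image and p(y) is their marginal over the batch. The snippet below assumes those softmax outputs have already been collected into an array; the random inputs in the example merely stand in for real Inception predictions.

```python
import numpy as np

def inception_score(probs, n_splits=10, eps=1e-12):
    """Compute a minimal Inception Score from an (N, C) array of softmax
    class probabilities for N generated images, averaged over splits."""
    scores = []
    for split in np.array_split(probs, n_splits):
        p_y = split.mean(axis=0, keepdims=True)        # marginal p(y)
        kl = split * (np.log(split + eps) - np.log(p_y + eps))
        scores.append(np.exp(kl.sum(axis=1).mean()))   # per-split IS
    return float(np.mean(scores)), float(np.std(scores))

# Illustration only: random "softmax" outputs standing in for a pretrained
# Inception network's predictions on generated images.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))
```
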
In the researchers’ experiments, AutoGAN outperformed state-of-the-art human-designed GANs on the CIFAR-10 dataset, achieving an Inception Score of 8.55 and an FID of 12.42. Although this preliminary study was a success, the researchers also identified a number of AutoGAN limitations and possible future research directions:
- The current search space is limited and should be expanded;
- AutoGAN has not been tested on higher-resolution image synthesis, which would require improved search algorithm efficiency;
- A better discriminator has not yet been discovered;
- In the future, AutoGAN should incorporate label information, as conditional and semi-supervised GANs do.
The paper AutoGAN: Neural Architecture Search for Generative Adversarial Networks is on arXiv, and code is available on GitHub.
Author: Reina Qi Wan | Editor: Michael Sarazen