A research team from MIT, Adobe Research, and Shanghai Jiao Tong University has introduced a novel method for reducing the cost and size of conditional GAN generators.
Generative Adversarial Networks (GANs) excel at synthesizing photorealistic images. Conditional GANs, or cGANs, provide more controllable image synthesis and enable many computer vision and graphics applications, such as transferring the motion in a dance video to a different person or creating VR facial animations for remote social interaction.
The problem is that cGANs are notoriously computationally intensive, which prevents them from being deployed on edge devices such as mobile phones, tablets, or VR headsets with limited hardware resources, memory, or power.
GAN Compression, the general-purpose compression method the team presents in their paper, has proven effective across different supervision settings (paired and unpaired), model architectures, and learning methods, as demonstrated on pix2pix, GauGAN, and CycleGAN. Experiments show that without losing image quality, the method reduces CycleGAN computation by more than 20 times and GauGAN computation by about 9 times.
Song Han, an MIT EECS assistant professor whose research focuses on efficient deep learning computing, led the team proposing the new compression framework for reducing inference time and model size of cGAN generators.
The researchers deployed their compressed pix2pix model on an edge device (an NVIDIA Jetson Nano). In a demonstration on the MIT HAN Lab YouTube channel, the team compares their model with the original-sized pix2pix on an interactive edges2shoes application.
The researchers identify two factors that make compressing conditional generative models for interactive applications difficult: the inherently unstable training dynamics of GANs, and the large architectural differences between generative models and the recognition models that existing compression methods target.
To address these challenges, the researchers first applied knowledge distillation, transferring the intermediate representations of the original teacher generator to the corresponding layers of its compressed student generator. They also found that creating pseudo pairs from the teacher model's output was helpful for unpaired training.
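The idea of matching intermediate representations can be illustrated with a minimal NumPy sketch. Here the feature shapes, the random projection standing in for a learned 1x1 convolution, and the single-layer loss are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intermediate feature maps (batch, channels, height, width).
# The teacher generator is wider (64 channels) than the compressed student (16).
teacher_feat = rng.standard_normal((1, 64, 8, 8))
student_feat = rng.standard_normal((1, 16, 8, 8))

# A learnable 1x1 convolution would map student channels to teacher channels;
# a fixed random projection matrix stands in for it here.
proj = rng.standard_normal((64, 16)) / np.sqrt(16)
mapped = np.einsum("oc,bchw->bohw", proj, student_feat)

# Distillation term: mean squared error between the mapped student features
# and the teacher features; the real method sums this over several layers.
distill_loss = np.mean((mapped - teacher_feat) ** 2)
print(mapped.shape, float(distill_loss))
```

In training, this distillation term would be added to the usual cGAN objective, so the student is pushed to mimic the teacher's internal representations rather than only its final outputs.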
The team used neural architecture search (NAS) to automatically find an efficient network with significantly lower computation cost and fewer parameters, then decoupled model training from architecture search by training a "once-for-all network" that contains all possible channel number configurations.
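The weight-sharing trick behind a once-for-all network can be sketched as follows. The specific channel counts and the slicing scheme below are simplifying assumptions for illustration; the actual method trains the shared weights so that every sub-network performs well:

```python
import numpy as np

rng = np.random.default_rng(1)

# One full-width convolution weight (out_channels, in_channels, k, k).
# Every narrower sub-network reuses a slice of these shared weights.
full_w = rng.standard_normal((32, 32, 3, 3))

def sub_weight(w, out_c, in_c):
    # A sub-network with fewer channels takes the leading channel slices.
    return w[:out_c, :in_c]

# During once-for-all training, a channel configuration is sampled each step;
# at search time, candidate configurations are evaluated without retraining.
candidate_configs = [(32, 32), (16, 32), (16, 16), (8, 16)]
out_c, in_c = candidate_configs[rng.integers(len(candidate_configs))]
w = sub_weight(full_w, out_c, in_c)
print(w.shape)
```

Because all configurations share one set of weights, the search step reduces to cheaply evaluating slices of the trained network instead of training each candidate architecture from scratch.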
The researchers applied their framework to CycleGAN, an unpaired image-to-image translation model; pix2pix, a cGAN-based paired image-to-image translation model; and GauGAN, a SOTA paired image-to-image model. The framework compressed all of them successfully across model architectures, learning algorithms, and supervision settings (paired or unpaired), while preserving image quality.
The authors say future work will include reducing the latency of models and finding efficient architectures for generative video models.
The paper GAN Compression: Efficient Architectures for Interactive Conditional GANs is on arXiv.
Journalist: Yuan Yuan | Editor: Michael Sarazen