On April 17th, researchers from Carnegie Mellon University and Petuum, a Pittsburgh-based CMU spinoff focused on artificial intelligence platforms, jointly published On Unifying Deep Generative Models. The paper establishes a high-level theoretical connection between various deep generative models, particularly Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). It has been accepted as a 2018 ICLR Conference Paper.
The researchers observe that GANs and VAEs have lacked a unified statistical formulation because of their distinct paradigms for learning generative parameters. They derive a new formulation of GANs that closely parallels that of VAEs, which could spark innovations in R&D on both model families and help researchers uncover common principles of machine intelligence that were previously undetected.
Under this formulation, both VAEs and GANs involve minimizing a KL divergence between their respective posterior and inference distributions, but with the generative parameter θ appearing on opposite sides of the divergence.
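Schematically, the symmetry can be sketched as follows. The notation here is a simplified paraphrase rather than the paper's exact formulation (the paper works with an additional real/fake indicator variable and a reversed distribution q^r):

```latex
% Simplified sketch of the symmetry (not the paper's exact notation):
% VAEs place the generative parameter \theta in the second argument,
\min_{\theta}\; \mathrm{KL}\!\left( q(z \mid x) \,\middle\|\, p_{\theta}(z \mid x) \right),
% while the paper's reformulation of GANs places \theta in the first,
\min_{\theta}\; \mathrm{KL}\!\left( p_{\theta}(x \mid z) \,\middle\|\, q(x \mid z) \right),
% i.e. the two objectives minimize KL divergences in opposite directions.
```

The direction of a KL divergence matters: the two orderings penalize mismatches asymmetrically, which is one lens on why the two model families behave differently in practice.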
This correspondence makes it straightforward to derive new extensions of GANs and VAEs by borrowing ideas from each other. For example, the importance weighting technique originally developed to enhance VAEs can naturally be ported to GANs, resulting in importance weighted GANs.
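To make the borrowed technique concrete, the snippet below is a minimal, generic sketch of importance weighting itself (the statistical idea underlying the IWAE-style estimators mentioned above), not the paper's importance weighted GAN: samples drawn from a proposal distribution are reweighted by the ratio of target to proposal density, so an expectation under the target can be estimated without sampling from it directly. The function names and the toy Gaussians are illustrative choices, not anything from the paper.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of a univariate Gaussian N(mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_weighted_mean(target_pdf, proposal_pdf, proposal_sample, n=50_000, seed=0):
    """Self-normalized importance-sampling estimate of E_target[x].

    Samples come from the proposal; each sample is reweighted by
    target_pdf(x) / proposal_pdf(x), so the weighted average targets
    the distribution we cannot (or prefer not to) sample directly.
    """
    rng = random.Random(seed)
    xs = [proposal_sample(rng) for _ in range(n)]
    ws = [target_pdf(x) / proposal_pdf(x) for x in xs]
    total = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / total

# Toy example: target N(2, 1), proposal a broader N(0, 2) that covers it.
est = importance_weighted_mean(
    target_pdf=lambda x: normal_pdf(x, 2.0, 1.0),
    proposal_pdf=lambda x: normal_pdf(x, 0.0, 2.0),
    proposal_sample=lambda rng: rng.gauss(0.0, 2.0),
)
print(est)  # close to the target mean of 2.0
```

In the VAE setting this same reweighting tightens the variational bound by averaging over multiple samples; the paper's observation is that the analogous correction can be applied on the GAN side of the unified framework.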
According to the original post, this unified statistical view offers several advantages:
Provide new insights into the different model behaviors. For example, it is widely observed that GANs tend to generate sharp yet low-diversity images, while images generated by VAEs tend to be blurrier. Formulating GANs and VAEs under a general framework would facilitate formal comparison between them and offer explanations of such empirical results.
Enable a more principled perspective of the broad landscape of generative modeling by subsuming the many variants and extensions into the unified framework and depicting a consistent roadmap of the advances in the field.
Enable the transfer of techniques across research lines in a principled way. For example, techniques originally developed for improving VAEs could be applied to GANs, and vice versa.
Author: Alex Chen | Editor: Tony Peng, Michael Sarazen