It is well known that deep neural network models, particularly discriminative models, are prone to making poor predictions with high confidence when they encounter data that deviates significantly from their training distribution. Generative models, by contrast, use unsupervised learning to model the distribution from which data is generated, and are therefore believed to have an advantage when handling inputs with novel variations.
AI researchers have widely applied deep generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) to downstream tasks such as anomaly detection, information regularization, and open set recognition.
However, a new DeepMind paper, Do Deep Generative Models Know What They Don’t Know?, presents research suggesting that generative models may not be as robust as widely believed:
“We investigate if modern deep generative models can be used for anomaly detection, as suggested by Bishop and the AABI panel, expecting a well-calibrated model to assign higher density to the training data than to some other data set. However, we find this to not be the case: when trained on CIFAR-10, VAEs, autoregressive models, and flow-based generative models all assign a higher density to SVHN than to the training data. We find this observation to be quite problematic and unintuitive since SVHN’s digit images are so visually distinct from the dogs, horses, trucks, boats, etc. found in CIFAR-10.”
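The density-based anomaly detection setup the authors examine can be sketched in a few lines: take a generative model trained on the in-distribution data, then compare the average log-likelihood it assigns to in-distribution test images versus out-of-distribution images. The snippet below is a minimal illustration only; the model object and its log_prob method are placeholders standing in for any trained density model (VAE, autoregressive model, or flow), not the paper’s actual code.

```python
# Minimal sketch of density-based OOD detection as described in the paper.
# `model` is a placeholder for any trained generative model (VAE, PixelCNN,
# Glow, ...) assumed to expose a log_prob(x) method; it is not DeepMind's code.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
cifar = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)
svhn = datasets.SVHN("data", split="test", download=True, transform=to_tensor)

def mean_log_likelihood(model, dataset, batch_size=128):
    """Average log p(x) the model assigns to images in a dataset."""
    loader = DataLoader(dataset, batch_size=batch_size)
    total, count = 0.0, 0
    with torch.no_grad():
        for x, _ in loader:
            total += model.log_prob(x).sum().item()  # assumed model interface
            count += x.shape[0]
    return total / count

# A well-calibrated model trained on CIFAR-10 would be expected to satisfy
#   mean_log_likelihood(model, cifar) > mean_log_likelihood(model, svhn),
# yet the paper reports the opposite for VAEs, autoregressive and flow models.
```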
DeepMind is not the first institute to question deep generative models’ stability or credibility in tasks such as anomaly detection. A team of researchers from institutes in Prague, Czech Republic, compared selected deep generative models with classical anomaly detection methods and concluded that the generative models’ performance depends largely on how their hyperparameters are selected.
The DeepMind paper concludes that researchers should be aware of deep generative models’ vulnerabilities, and that the models will require further improvement in this regard.
“In turn, we must then temper the enthusiasm with which we preach the benefits of generative models until their sensitivity to out-of-distribution inputs is better understood.”
The paper Do Deep Generative Models Know What They Don’t Know? is on arXiv.
Journalist: Tony Peng | Editor: Michael Sarazen