
ICML 2020 Announces Outstanding Paper Awards

Organizers of the 37th International Conference on Machine Learning (ICML) have announced their Outstanding Paper awards, recognizing papers from the current conference that are “strong representatives of solid theoretical and empirical work in our field.”

A total of 1,088 papers out of 4,990 submissions were accepted to the prestigious machine learning conference. The resulting acceptance rate of 21.8 percent is slightly lower than 2019’s 22.6 percent (774 accepted papers from 3,424 submissions), a dip likely driven in part by the sharp increase in submissions.

Outstanding Paper Awards:

On Learning Sets of Symmetric Elements


Authors: Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

Institutions: NVIDIA Research, Stanford University, Bar Ilan University

Abstract: Learning from unordered sets is a fundamental learning setup, recently attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to their own symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction. In this paper, we present a principled approach to learning sets of general symmetric elements. We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images. We further show that networks that are composed of these layers, called Deep Sets for Symmetric elements layers (DSS), are universal approximators of both invariant and equivariant functions. DSS layers are also straightforward to implement. Finally, we show that they improve over existing set-learning architectures in a series of experiments with images, graphs and point clouds.
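
For a concrete feel for the construction, below is a minimal sketch, assuming PyTorch, of a DSS-style layer for sets of images, where the element symmetry is translation and is handled by convolutions. The class and argument names (DSSConvLayer, in_ch, out_ch) are illustrative, not the authors’ code: each element passes through a “Siamese” convolution, and a convolution of the set sum is added back to every element, keeping the layer equivariant to both element reordering and translation.

```python
# A minimal sketch of a DSS-style layer for sets of images, assuming PyTorch.
# Names (DSSConvLayer, in_ch, out_ch) are illustrative, not the authors' code.
import torch
import torch.nn as nn


class DSSConvLayer(nn.Module):
    """Linear layer equivariant to (a) permutations of set elements and
    (b) translations within each image element: a per-element convolution
    plus a convolution of the set-aggregated element."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.siamese = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)    # applied to each element
        self.aggregate = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # applied to the set sum

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, set_size, channels, H, W)
        b, n, c, h, w = x.shape
        per_elem = self.siamese(x.reshape(b * n, c, h, w)).reshape(b, n, -1, h, w)
        pooled = self.aggregate(x.sum(dim=1))   # (b, out_ch, H, W), permutation-invariant term
        return per_elem + pooled.unsqueeze(1)   # broadcast back over the set dimension


# Usage: a batch of two sets, each containing 5 RGB images of size 32x32
layer = DSSConvLayer(3, 16)
out = layer(torch.randn(2, 5, 3, 32, 32))
print(out.shape)  # torch.Size([2, 5, 16, 32, 32])
```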

Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems


Authors: Kaixuan Wei, Angelica Aviles-Rivero, Jingwei Liang, Ying Fu, Carola-Bibiane Schönlieb, Hua Huang

Institutions: Beijing Institute of Technology, University of Cambridge

Abstract: Plug-and-play (PnP) is a non-convex framework that combines ADMM or other proximal algorithms with advanced denoiser priors. Recently, PnP has achieved great empirical success, especially with the integration of deep learning-based denoisers. However, a key problem of PnP-based approaches is that they require manual parameter tweaking, which is necessary to obtain high-quality results across widely varying imaging conditions and scene content. In this work, we present a tuning-free PnP proximal algorithm, which can automatically determine the internal parameters including the penalty parameter, the denoising strength and the terminal time. A key part of our approach is to develop a policy network for automatic search of parameters, which can be effectively learned via mixed model-free and model-based deep reinforcement learning. We demonstrate, through numerical and visual experiments, that the learned policy can customize different parameters for different states, and is often more efficient and effective than existing handcrafted criteria. Moreover, we discuss the practical considerations of the plugged denoisers, which together with our learned policy yield state-of-the-art results. This is demonstrated on both linear and nonlinear exemplary inverse imaging problems; in particular, we show promising results on Compressed Sensing MRI and phase retrieval.
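
As a rough illustration of the setup, here is a minimal plug-and-play ADMM loop, assuming NumPy, in which the per-iteration penalty, denoising strength, and stopping decision are supplied by a policy callable. In the paper that policy is a network trained with mixed model-free and model-based reinforcement learning; in this sketch `policy`, `forward_op`, `adjoint_op`, and `denoiser` are hand-written placeholders rather than the authors’ implementation.

```python
# A minimal sketch of a tuning-free plug-and-play (PnP) ADMM loop, assuming NumPy.
# `forward_op`, `adjoint_op`, `denoiser`, and `policy` are placeholders; in the paper
# the penalty, denoising strength, and stopping decision come from a learned policy network.
import numpy as np


def pnp_admm(y, forward_op, adjoint_op, denoiser, policy, n_iters=30, cg_steps=10):
    """Solve y = A(x) + noise with ADMM splitting: data-fit x-update / denoiser z-update."""
    x = adjoint_op(y)
    z, u = x.copy(), np.zeros_like(x)

    for t in range(n_iters):
        mu, sigma, stop = policy(t, x, z)        # penalty, denoiser strength, stop flag
        if stop:
            break

        # x-update: minimize ||A x - y||^2 + mu ||x - (z - u)||^2 by a few CG steps
        b = adjoint_op(y) + mu * (z - u)
        matvec = lambda v: adjoint_op(forward_op(v)) + mu * v
        r = b - matvec(x)
        p, rs = r.copy(), np.vdot(r, r)
        for _ in range(cg_steps):
            Ap = matvec(p)
            alpha = rs / np.vdot(p, Ap)
            x, r = x + alpha * p, r - alpha * Ap
            rs_new = np.vdot(r, r)
            p, rs = r + (rs_new / rs) * p, rs_new

        # z-update: plug in an off-the-shelf denoiser at strength sigma
        z = denoiser(x + u, sigma)

        # dual update
        u = u + x - z

    return z


# Usage with a toy identity forward operator and stub denoiser/policy
denoiser_stub = lambda v, s: v                            # identity stand-in; swap in e.g. a CNN denoiser
policy = lambda t, x, z: (0.5, 10.0 / (t + 1), t >= 20)   # decaying strength, fixed stopping rule
y = np.random.randn(64, 64)
x_hat = pnp_admm(y, lambda v: v, lambda v: v, denoiser_stub, policy)
```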

Outstanding Paper (Honorable Mentions):

Efficiently sampling functions from Gaussian process posteriors


Authors: James Wilson, Slava Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Deisenroth

Institutions: Imperial College London, St. Petersburg State University, St. Petersburg Department of Steklov Mathematical Institute of Russian Academy of Sciences, University College London

Abstract: Gaussian processes are the gold standard for many real-world modeling problems, especially in cases where a model’s success hinges upon its ability to faithfully represent predictive uncertainty. These problems typically exist as parts of larger frameworks, wherein quantities of interest are ultimately defined by integrating over posterior distributions. These quantities are frequently intractable, motivating the use of Monte Carlo methods. Despite substantial progress in scaling up Gaussian processes to large training sets, methods for accurately generating draws from their posterior distributions still scale cubically in the number of test locations. We identify a decomposition of Gaussian processes that naturally lends itself to scalable sampling by separating out the prior from the data. Building off of this factorization, we propose an easy-to-use and general-purpose approach for fast posterior sampling, which seamlessly pairs with sparse approximations to afford scalability both during training and at test time. In a series of experiments designed to test competing sampling schemes’ statistical properties and practical ramifications, we demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
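
The decomposition at the heart of the paper separates a posterior sample into a prior sample plus a data-dependent correction (Matheron’s rule). The sketch below, assuming NumPy and an RBF kernel, approximates the prior sample with random Fourier features; the helper names and hyperparameters (n_features, lengthscale) are illustrative rather than the authors’ implementation.

```python
# A minimal sketch of decoupled (pathwise) sampling from a GP posterior, assuming NumPy
# and an RBF kernel: a random-Fourier-feature prior sample corrected via Matheron's rule.
import numpy as np


def rbf(a, b, lengthscale=1.0):
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)


def sample_posterior_path(X, y, X_test, lengthscale=1.0, noise=1e-2, n_features=1024, seed=None):
    rng = np.random.default_rng(seed)
    d = X.shape[1]

    # 1) Approximate a prior sample f(.) with random Fourier features of the RBF kernel.
    omega = rng.normal(size=(n_features, d)) / lengthscale
    phase = rng.uniform(0, 2 * np.pi, size=n_features)
    w = rng.normal(size=n_features)
    feats = lambda Z: np.sqrt(2.0 / n_features) * np.cos(Z @ omega.T + phase)
    f_prior = lambda Z: feats(Z) @ w

    # 2) Matheron's rule: correct the prior sample with the observed data.
    eps = rng.normal(scale=np.sqrt(noise), size=len(y))
    K = rbf(X, X, lengthscale) + noise * np.eye(len(y))
    residual = np.linalg.solve(K, y - f_prior(X) - eps)
    return f_prior(X_test) + rbf(X_test, X, lengthscale) @ residual


# Usage: draw one posterior sample path on a dense test grid
X = np.random.uniform(-3, 3, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(20)
X_test = np.linspace(-3, 3, 200)[:, None]
path = sample_posterior_path(X, y, X_test)
```

Because the data-dependent correction is evaluated with simple matrix-vector products, drawing the sample at many test locations scales linearly in the number of test points rather than cubically.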

Generative Pretraining from Pixels


Authors: Mark Chen, Alec Radford, Rewon Child, Jeffrey K Wu, Heewoo Jun, David Luan, Ilya Sutskever

Institutions: OpenAI

Abstract: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine tuning, matching the top supervised pre-trained models. An even larger model trained on a mixture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features.
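
To make the pretraining objective concrete, here is a heavily simplified sketch, assuming PyTorch, of the image-GPT recipe: quantized pixels are flattened into a 1D sequence, a causal Transformer is trained on next-pixel prediction, and a linear probe is later fit on pooled hidden features. The model sizes, vocabulary, and `linear_probe` head are illustrative placeholders, not OpenAI’s configuration.

```python
# A minimal sketch of autoregressive pixel pretraining plus a linear probe, assuming PyTorch.
# Sizes and the probe head are illustrative placeholders, not OpenAI's configuration.
import torch
import torch.nn as nn


class PixelGPT(nn.Module):
    def __init__(self, vocab=512, seq_len=32 * 32, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(seq_len, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, n_layers)
        self.head = nn.Linear(d_model, vocab)       # next-pixel logits for pretraining
        self.linear_probe = nn.Linear(d_model, 10)  # e.g. CIFAR-10 classes, trained separately

    def forward(self, tokens):
        # tokens: (batch, seq_len) of quantized pixel values; causal mask enforces autoregression
        L = tokens.shape[1]
        h = self.tok(tokens) + self.pos(torch.arange(L, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(L).to(tokens.device)
        h = self.blocks(h, mask=mask)
        return self.head(h), h.mean(dim=1)          # logits for pretraining, pooled features for probing


# Pretraining step: predict pixel t+1 from pixels <= t
model = PixelGPT()
tokens = torch.randint(0, 512, (2, 32 * 32))
logits, feats = model(tokens)
loss = nn.functional.cross_entropy(logits[:, :-1].reshape(-1, 512), tokens[:, 1:].reshape(-1))
probe_logits = model.linear_probe(feats)            # linear probe on (frozen) features
```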

ICML 2020 is being held as a virtual conference and continues through Saturday, July 18.


Journalist: Fangyu Cai | Editor: Michael Sarazen


