
NeurIPS 2021 Announces Its 6 Outstanding Paper Awards, 2 Datasets and Benchmarks Track Best Paper Awards, and the Test of Time Award

The NeurIPS 2021 organizing committee has announced its paper awards, with six submissions receiving Outstanding Paper Awards, two papers recognized in the new Datasets and Benchmarks Track Best Paper Awards category, and one Test of Time Award.

As we approach year’s end, the AI community once again turns its attention to NeurIPS (Conference and Workshop on Neural Information Processing Systems), one of the world’s most prestigious machine learning gatherings for both industry and academia. NeurIPS 2021 kicks off next week, and yesterday the organizing committee announced its six Outstanding Paper Awards, two selections in the new Datasets and Benchmarks Track Best Paper Awards category, and its Test of Time Award.

A total of 9,122 papers were submitted to NeurIPS 2021, and 2,344 were accepted. The acceptance rate of 26 percent (with 3 percent designated Spotlight papers) was slightly up from last year and the highest since 2013.

The top three contributing companies by accepted paper count were Google (177), Microsoft (116) and DeepMind (81).

The top three academic institutions by paper count were MIT (142 papers), Stanford University (139 papers), and CMU (117 papers). The University of California, Berkeley, ranked a close fourth (116 papers), while Tsinghua University ranked fifth with 90 papers and Peking University tied for eighth with 63 papers.

By country/region, the top three contributors were the United States (1,431 papers), China (411 papers) and the United Kingdom (268 papers).

Six submissions were honoured as Outstanding Papers:

  • A Universal Law of Robustness via Isoperimetry
    by Sébastien Bubeck and Mark Sellke; Microsoft Research and Stanford University
    This paper proposes a theoretical model to explain why many state-of-the-art deep networks require many more parameters than are necessary to smoothly fit the training data, and offers a testable prediction of the model sizes needed to develop robust models for ImageNet classification.
  • On the Expressivity of Markov Reward
    by David Abel, Will Dabney, Anna Harutyunyan, Mark K. Ho, Michael Littman, Doina Precup, and Satinder Singh; DeepMind, Princeton University and Brown University
    This paper provides a careful, clear exposition of when Markov rewards are, or are not, sufficient to enable a system designer to specify a task, a preference over behaviours, or a preference over state and action sequences. It sheds light on the challenges of reward design and may open up future research into when and how the Markov framework is sufficient to achieve desirable performance.
  • Deep Reinforcement Learning at the Edge of the Statistical Precipice
    by Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, and Marc G. Bellemare; Google Research and Université de Montréal
    This paper suggests practical approaches to improve the rigor of deep reinforcement learning algorithm comparisons, such as reporting interquartile-mean scores with stratified bootstrap confidence intervals rather than bare point estimates from a handful of runs (a minimal sketch of this recommendation appears after this list).
  • MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers
    by Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui; University of Washington, Allen Institute for Artificial Intelligence and Stanford University
    This paper provides a divergence measure to compare the distribution of model-generated text with the distribution of human-generated text, which is crucial for the progress of open-ended text generation.
  • Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms
    by Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié, and Adrien Taylor; INRIA – Université Paris Sciences et Lettres, Ecole Polytechnique Fédérale de Lausanne, Univ. Grenoble Alpes, MSR-Inria Joint Centre
    This paper describes a “continuized” version of Nesterov’s accelerated gradient method in which the two separate vector variables evolve jointly in continuous time. This new approach leads to a (randomized) discrete-time method that: 1) enjoys the same accelerated convergence as Nesterov’s method, 2) is easier to understand, and 3) avoids additional errors from discretizing a continuous-time process.
  • Moser Flow: Divergence-based Generative Modeling on Manifolds
    by Noam Rozen, Aditya Grover, Maximilian Nickel, and Yaron Lipman; Weizmann Institute of Science, Meta (Facebook) AI and UCLA
    This paper proposes a method for training continuous normalizing flow (CNF) generative models over Riemannian manifolds, with faster training times and superior test performance.
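
To make the Statistical Precipice recommendation concrete: the paper advocates reporting the interquartile mean (IQM) of normalized scores across runs and tasks, with stratified bootstrap confidence intervals. The authors released the rliable library for this purpose; the sketch below is only a minimal, self-contained illustration of the idea using NumPy and SciPy, run on hypothetical score data rather than the paper’s exact pipeline.

```python
import numpy as np
from scipy import stats

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of scores."""
    return stats.trim_mean(scores, proportiontocut=0.25, axis=None)

def stratified_bootstrap_ci(score_matrix, n_boot=2000, alpha=0.05, seed=None):
    """Bootstrap CI for the IQM over a (runs x tasks) score matrix.

    Runs are resampled independently within each task ("stratified"),
    in the spirit of Agarwal et al. (2021).
    """
    rng = np.random.default_rng(seed)
    n_runs, n_tasks = score_matrix.shape
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        # For each task (column), draw run indices with replacement.
        idx = rng.integers(0, n_runs, size=(n_runs, n_tasks))
        resampled = np.take_along_axis(score_matrix, idx, axis=0)
        boot_stats[b] = iqm(resampled)
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    return iqm(score_matrix), (lo, hi)

# Hypothetical data: 10 runs of an agent on 26 tasks, with scores
# already normalized against a reference baseline.
scores = np.random.default_rng(0).gamma(2.0, 0.5, size=(10, 26))
point, (lo, hi) = stratified_bootstrap_ci(scores, seed=1)
print(f"IQM = {point:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The IQM discards the top and bottom quartiles before averaging, making it far less sensitive to outlier runs than the mean while using more of the data than the median.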

The recipient of the NeurIPS 2021 Test of Time Award is the 2010 paper Online Learning for Latent Dirichlet Allocation by Matthew Hoffman, David Blei, and Francis Bach; Princeton University and INRIA.
This paper presents a stochastic variational gradient-based inference procedure for training Latent Dirichlet Allocation (LDA) models on very large text corpora. It represented the first stepping stone toward general stochastic gradient variational inference procedures for a much broader class of models, and has had a huge impact on the machine learning community.
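
For readers who want to try the method, gensim’s LdaModel implements this online variational Bayes algorithm. The sketch below fits a model on a tiny hypothetical corpus; parameter names follow gensim’s API, where decay and offset correspond to the learning-rate schedule in the paper.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy corpus (hypothetical); real use would stream millions of documents.
docs = [
    ["topic", "model", "text", "corpus"],
    ["variational", "inference", "gradient", "stochastic"],
    ["online", "learning", "minibatch", "update"],
    ["latent", "dirichlet", "allocation", "topic"],
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(
    corpus=corpus,
    id2word=dictionary,
    num_topics=2,
    chunksize=2,      # minibatch size
    update_every=1,   # update the model after each chunk (online mode)
    passes=1,
    decay=0.5,        # learning-rate decay (kappa in Hoffman et al.)
    offset=1.0,       # learning-rate offset (tau_0 in Hoffman et al.)
)

# Inspect the learned topics.
for topic_id, words in lda.show_topics(num_topics=2, num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])
```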

Two papers were recognized in the new Datasets and Benchmarks Track Best Paper Awards category:

  • Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research
    by Bernard Koch, Emily Denton, Alex Hanna, and Jacob Gates Foster; University of California, Los Angeles and Google Research
    This paper analyzes thousands of papers to study the evolution of dataset use within different machine learning subcommunities and the interplay between dataset adoption and creation. It serves as a “wake-up call” for researchers to be more critical when selecting benchmark datasets and to promote the creation of new, more varied datasets.
  • ATOM3D: Tasks on Molecules in Three Dimensions
    by Raphael John Lamarre Townshend, Martin Vögele, Patricia Adriana Suriana, Alexander Derry, Alexander Powers, Yianni Laloudakis, Sidhika Balachandar, Bowen Jing, Brandon M. Anderson, Stephan Eismann, Risi Kondor, Russ Altman, and Ron O. Dror; Stanford University
    This paper introduces a collection of benchmark datasets with 3D representations of small molecules and/or biopolymers, and provides insights on how to choose and design models for a given task.

NeurIPS 2021 runs Monday, December 6 through Tuesday, December 14. Due to Covid-19 concerns, the conference is being held entirely virtually for the second consecutive year. The full announcement, Announcing the NeurIPS 2021 Award Recipients, is available on the NeurIPS website.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
