The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is one of the most influential annual meetings for presenting and sharing research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. The NeurIPS 2022 organizing committee announced its coveted awards this week, recognizing thirteen Outstanding Papers, two in the Datasets & Benchmarks category, and a Test of Time paper.
Thirteen submissions were honoured as Outstanding Papers:
- Is Out-of-distribution Detection Learnable? by Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu (University of Technology Sydney, University of Wisconsin-Madison, Chinese Academy of Sciences, ETH Zurich, Hong Kong Baptist University, University of Melbourne)
The paper provides a rigorous theoretical study of out-of-distribution (OOD) detection, the problem of deciding whether an input is in-distribution (ID) or OOD, using probably approximately correct (PAC) learning theory, and unpacks a range of practical scenarios. It benefits the AI community by showing when and how OOD detection can work in real applications, and it can serve as a guideline for designing OOD detection algorithms.
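The paper itself is theoretical, but the decision it formalizes is easy to picture. Below is a minimal sketch of one common practical baseline, maximum softmax probability (which is not the paper's contribution), that flags low-confidence inputs as OOD:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def is_ood(logits, threshold=0.9):
    """Flag inputs whose top softmax probability falls below a threshold."""
    return softmax(logits).max(axis=1) < threshold

logits = np.array([[9.0, 0.5, 0.1],    # confident prediction: treated as ID
                   [1.1, 1.0, 0.9]])   # low confidence: flagged as OOD
print(is_ood(logits))                  # [False  True]
```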
- Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding by Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo-Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, Mohammad Norouzi (Google Brain)
This work leverages large transformer language models and diffusion models to generate photorealistic images with deep language understanding. It demonstrates that large language models pretrained on text-only corpora are effective text encoders for text-to-image generation and benefit from model scaling, and it achieves a new state-of-the-art FID score of 7.27 on the COCO dataset.
- Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila, Samuli Laine (NVIDIA)
The paper explores the algorithmic design space of diffusion models, to the benefit of the whole field of diffusion-based generative modelling. It also serves as an excellent survey of diffusion models while providing generally applicable improvements to both sampling and training that lead to new state-of-the-art results.
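To give a flavour of the design space being elucidated, here is a minimal sketch of a deterministic probability-flow sampler using the rho-spaced noise schedule the paper popularized, but with a plain Euler integrator rather than the paper's recommended second-order step; the closed-form denoiser for Gaussian toy data stands in for a trained network, and all constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_data = 2.0   # toy data distribution: N(0, sigma_data^2)

def denoise(x, sigma):
    """Ideal denoiser for Gaussian toy data; in practice a trained network."""
    return x * sigma_data**2 / (sigma_data**2 + sigma**2)

# rho-spaced noise schedule, integrated with a simple Euler step.
sigma_min, sigma_max, rho, steps = 0.002, 80.0, 7, 50
t = np.linspace(0.0, 1.0, steps)
sigmas = (sigma_max**(1/rho) + t * (sigma_min**(1/rho) - sigma_max**(1/rho)))**rho
sigmas = np.append(sigmas, 0.0)

x = rng.normal(0.0, sigmas[0], size=10_000)       # start from pure noise
for s, s_next in zip(sigmas[:-1], sigmas[1:]):
    d = (x - denoise(x, s)) / s                   # probability-flow ODE: dx/dsigma
    x = x + (s_next - s) * d                      # Euler step toward sigma_next

print(x.std())  # ~2.0: the samples recover the data distribution's scale
```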
- ProcTHOR: Large-Scale Embodied AI Using Procedural Generation by Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Kiana Ehsani, Jordi Salvador, Winson Han, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi (Allen Institute for AI, University of Washington)
The paper presents ProcTHOR, a framework for procedurally generating interactive 3D environments from an underlying distribution of room and object layouts. Agents pretrained in these environments achieve state-of-the-art results across a wide range of embodied-AI tasks using only RGB images.
- Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines by Sreejan Kumar, Carlos G Correa, Ishita Dasgupta, Raja Marjieh, Michael Hu, Robert D. Hawkins, Jonathan Cohen, Nathaniel Daw, Karthik R Narasimhan, Thomas L. Griffiths (Princeton University, DeepMind)
The paper provides a clean approach to instilling humanlike inductive biases into neural networks. The approach improves generalization and performance, and the team empirically shows that it leads to more humanlike behaviour in downstream meta-reinforcement learning agents.
- A Neural Corpus Indexer for Document Retrieval by Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Hao Sun, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Mao Yang (Microsoft, Tsinghua University, University of Illinois, Peking University)
This paper presents a framework in which a model takes a query and directly generates the ID of a relevant document. The proposed Neural Corpus Indexer achieves very promising results on the Natural Questions 320k dataset, outperforming baseline generative retrieval models by a large margin.
- High-dimensional Limit Theorems for SGD: Effective Dynamics and Critical Scaling by Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath (New York University, UC Berkeley, University of Waterloo)
The paper studies the high-dimensional scaling limits of stochastic gradient descent (SGD) with constant step size, deriving effective dynamics that improve understanding of its behaviour on various estimation tasks.
- Gradient Descent: The Ultimate Optimizer by Kartik Chandra, Audrey Xie, Jonathan Ragan-Kelley, Erik Meijer (MIT, Meta)
The paper proposes a simple and elegant modification to backpropagation that enables hypergradients to be computed automatically, significantly reducing the manual effort required to generalize the approach to other optimizers and to hyperparameters beyond the learning rate.
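The paper computes these hypergradients automatically by backpropagating through the optimizer's own update step; the sketch below instead hand-derives the same quantity for plain SGD on a toy quadratic, with assumed step sizes, just to show the mechanism:

```python
import numpy as np

def grad(w):
    """Gradient of a toy quadratic loss L(w) = 0.5 * ||w||^2."""
    return w

w = np.array([5.0, -3.0])
alpha, kappa = 0.01, 0.001   # learning rate and hyper-learning rate (assumed values)
prev_g = np.zeros_like(w)

for _ in range(200):
    g = grad(w)
    # Differentiating the update w_t = w_{t-1} - alpha * g_{t-1} gives the
    # hypergradient dL(w_t)/d(alpha) = -g_t . g_{t-1}; descend on alpha with it.
    alpha += kappa * np.dot(g, prev_g)
    w = w - alpha * g
    prev_g = g

print(w, alpha)  # w approaches 0 while alpha has adapted upward on its own
```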
- Riemannian Score-Based Generative Modelling by Valentin De Bortoli, Emile Mathieu, Michael John Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet (PSL University, Oxford University)
This paper generalizes score-based generative models (SGMs) to compact Riemannian manifolds, addressing the challenge of designing manifold-valued SGMs.
- Gradient Estimation with Discrete Stein Operators by Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis Titsias, Lester Mackey (Stanford University, Tsinghua University, DeepMind, Microsoft)
The paper introduces a new type of control variate (CV), a variance-reduction technique based on Stein operators for discrete distributions, that greatly reduces the variance of gradient estimates in the discrete setting while outperforming a variety of baselines.
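The paper's operators are specific to discrete distributions, but the underlying control-variate mechanism is easy to demonstrate. A minimal generic sketch (not the paper's estimator) of how subtracting a zero-mean quantity correlated with the target reduces an estimator's variance:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=100_000)

f = np.exp(x)   # estimate E[e^X] = e - 1 ~= 1.71828 for X ~ Uniform(0, 1)
h = x - 0.5     # control variate: known mean of zero under the same X

c = np.cov(f, h)                    # joint sample covariance matrix
beta = c[0, 1] / c[1, 1]            # variance-minimizing coefficient
plain = f.mean()                    # ordinary Monte Carlo estimate
controlled = (f - beta * h).mean()  # same expectation, much lower variance

print(plain, controlled)  # both ~1.718; the controlled estimator is less noisy
```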
- An Empirical Analysis of Compute-optimal Large Language Model Training by Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack William Rae, Laurent Sifre (DeepMind)
This work demonstrates that current transformer-based large language models (LLMs) are significantly undertrained for their size and proposes several predictive approaches for optimally setting model size and training duration under a fixed compute budget.
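A back-of-the-envelope sketch of the headline recipe, using the common C ≈ 6ND approximation for training FLOPs and the roughly 20-tokens-per-parameter ratio implied by the paper's results (both are rules of thumb, not the paper's exact fitted laws):

```python
# Two widely used approximations (not the paper's exact fitted laws):
#   training compute  C ~= 6 * N * D   (FLOPs for N params on D tokens)
#   compute-optimal   D ~= 20 * N      (about 20 tokens per parameter)

def compute_optimal(compute_flops, tokens_per_param=20.0):
    """Rough compute-optimal model size N and token count D for budget C."""
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Chinchilla's own budget of ~5.76e23 FLOPs recovers ~70B params, ~1.4T tokens.
n, d = compute_optimal(5.76e23)
print(f"{n:.3g} parameters, {d:.3g} tokens")
```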
- Beyond Neural Scaling Laws: Beating Power Law Scaling via Data Pruning by Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos (Stanford University, University of Tübingen, Meta AI)
The paper demonstrates, in theory and in practice, that the power-law scaling of error with dataset size can be beaten by using intelligent data-pruning metrics in large-scale settings.
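A minimal sketch of the pruning step, assuming a per-example difficulty score is already available (the paper studies several such metrics; the `prune` helper below is hypothetical and illustrative, not the authors' code):

```python
import numpy as np

def prune(X, y, difficulty, keep_frac, keep_hard=True):
    """Keep a fraction of examples ranked by a per-example difficulty score.
    The paper's theory favours keeping hard examples when data is abundant
    and easy examples when data is scarce."""
    order = np.argsort(difficulty)            # sorted easy -> hard
    k = max(1, int(len(X) * keep_frac))
    keep = order[-k:] if keep_hard else order[:k]
    return X[keep], y[keep]

# Toy usage: keep the 30% "hardest" examples of a random dataset.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 16)), rng.integers(0, 2, size=1000)
scores = rng.random(1000)                     # stand-in difficulty scores
X_small, y_small = prune(X, y, scores, keep_frac=0.3)
print(X_small.shape)                          # (300, 16)
```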
- On-Demand Sampling: Learning Optimally from Multiple Distributions by Nika Haghtalab, Michael Jordan, Eric Zhao (UC Berkeley)
The paper presents optimal sample complexity bounds for several multi-distribution learning problems. The researchers obtain near-optimal rates for agnostic collaborative learning, group distributionally robust optimization (DRO), and agnostic federated learning, improving on prior state-of-the-art results by a large margin.
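To illustrate the on-demand idea in miniature (a toy sketch, not the paper's algorithm): a single shared parameter is trained to serve two Gaussian "distributions" at once by repeatedly requesting fresh samples from whichever one the current model fits worst, which drives it toward the min-max optimum:

```python
import numpy as np

rng = np.random.default_rng(0)
means = [0.0, 4.0]   # two "distributions": N(0, 1) and N(4, 1)

def draw(mean, n=32):
    """Request n fresh samples from one distribution, on demand."""
    return rng.normal(mean, 1.0, size=n)

theta = 10.0                                 # single shared model parameter
held_out = [draw(m, n=512) for m in means]   # for estimating per-group loss

for _ in range(500):
    losses = [np.mean((s - theta) ** 2) for s in held_out]
    worst = int(np.argmax(losses))           # group the model serves worst
    batch = draw(means[worst])               # sample on demand from it
    theta -= 0.05 * 2.0 * np.mean(theta - batch)   # SGD step on squared loss

print(theta)  # hovers near 2.0, the min-max optimum between the two groups
```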
NeurIPS 2022 also announced LAION-5B: An Open Large-scale Dataset for Training Next Generation Image-Text Models as its Outstanding Datasets paper and MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge as the Outstanding Benchmarks paper.
Last but certainly not least, ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton (aka the “AlexNet paper”) was unanimously selected for the NeurIPS 2022 Test of Time award. The influential 2012 paper introduced the first convolutional neural network (CNN) trained on the ImageNet database to surpass state-of-the-art results of the time by a large margin.
NeurIPS 2022 is a hybrid conference that runs from November 28 through December 9. The first week will be held at the New Orleans Convention Center in the US, and the second week will be a virtual gathering.
The post “Announcing the NeurIPS 2022 Awards” is available on the NeurIPS website.
Author: Hecate He | Editor: Michael Sarazen
