AI Machine Learning & Data Science Research

NeurIPS 2022 Announces Its Outstanding Main Track Papers, Outstanding Dataset & Benchmark Papers, and Test of Time Award

The NeurIPS 2022 organizing committee has announced its annual awards, recognizing 13 Outstanding Papers, two in the Datasets & Benchmarks category, and a Test of Time Paper.

The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is one of the most influential annual meetings for presenting and sharing research on neural information processing systems in their biological, technological, mathematical, and theoretical aspects. The NeurIPS 2022 organizing committee announced its coveted awards this week, recognizing thirteen Outstanding Papers, two in the Datasets & Benchmarks category, and a Test of Time paper.

Thirteen submissions were honoured as Outstanding Papers:

  • Is Out-of-distribution Detection Learnable? by Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu (University of Technology Sydney, University of Wisconsin-Madison, Chinese Academy of Sciences, ETH Zurich, Hong Kong Baptist University, University of Melbourne)

The paper provides a rigorous theoretical study of out-of-distribution (OOD) detection (determining whether an input is in-distribution or OOD) using probably approximately correct (PAC) learning theory, and unpacks a range of practical scenarios. It benefits the AI community by showing when and how OOD detection can work in real applications, and can serve as a guideline for designing OOD detection algorithms.
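In practice, OOD detection is usually framed as thresholding a score computed from a trained classifier's outputs. A minimal sketch using the common maximum-softmax-probability baseline (chosen here purely for illustration; the paper analyzes learnability in general rather than any particular detector):

```python
import math

def softmax(logits):
    # numerically stable softmax
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_ood(logits, threshold=0.5):
    """Flag an input as OOD when the classifier's maximum
    softmax probability falls below a confidence threshold."""
    return max(softmax(logits)) < threshold

peaked = [8.0, 0.1, 0.2]  # confident prediction: treated as in-distribution
flat = [0.1, 0.0, 0.2]    # near-uniform prediction: flagged as OOD
```

The paper's results speak to when a detector of this general shape can be learned at all, and under which distributional assumptions.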

  • Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding

This work leverages large transformer language models and diffusion models to generate photorealistic images with deep language understanding. It demonstrates that large language models pretrained on text-only corpora are effective for text-to-image generation and benefit from model scaling, and it achieves a new state-of-the-art FID score of 7.27 on the COCO dataset.

  • Elucidating the Design Space of Diffusion-Based Generative Models

The paper explores the algorithmic design space of diffusion models, a contribution of broad value to the field of deep generative modelling. It also serves as an excellent survey of diffusion models while providing generally applicable improvements to both sampling and training that lead to new state-of-the-art results.

  • ProcTHOR: Large-Scale Embodied AI Using Procedural Generation

The paper presents ProcTHOR, a framework for procedurally generating interactive 3D environments from an underlying distribution of room and object layouts, which achieves state-of-the-art results over a wide range of embodied-AI tasks that rely on RGB images only.

  • Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

The paper provides a clean approach to instilling humanlike inductive biases into neural networks. The approach improves generalization and performance, and the team empirically shows that it leads to more humanlike behaviour in downstream meta-reinforcement learning agents.

  • A Neural Corpus Indexer for Document Retrieval by Yujing Wang, Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Hao Sun, Qi Chen, Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Mao Yang (Microsoft, Tsinghua University, University of Illinois, Peking University)

This paper presents a framework that uses the query to directly predict a relevant document identifier. The proposed Neural Corpus Indexer achieves very promising results on the NQ320k (Natural Questions) benchmark, outperforming baseline generative retrieval models by a large margin.

  • High-dimensional Limit Theorems for SGD: Effective Dynamics and Critical Scaling

The paper explores the complexity and high-dimensional scaling limits of stochastic gradient descent (SGD) with a constant step size, improving our understanding of its behaviour across various estimation tasks.

  • Gradient Descent: The Ultimate Optimizer

The paper proposes a simple and elegant modification to backpropagation that enables hypergradients to be computed automatically, significantly reducing the manual effort required to generalize the technique to other optimizers and to hyperparameters beyond the learning rate.
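The core idea, treating the learning rate itself as a quantity to optimize by gradient descent, can be hand-derived on a toy quadratic (a simplified sketch; the paper's contribution is making this automatic inside reverse-mode autodiff, for arbitrary optimizers and hyperparameters):

```python
def grad(w):
    # gradient of the toy loss L(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

def hypergradient(w, lr):
    """Differentiate the post-step loss L(w - lr*g(w)) w.r.t. lr.
    By the chain rule: dL/dlr = -g(w) * L'(w - lr*g(w))."""
    g = grad(w)
    w_next = w - lr * g
    return -g * grad(w_next)

w, lr = 0.0, 0.1
hg = hypergradient(w, lr)   # negative here: a larger step would reduce the loss
lr = lr - 0.01 * hg         # update the learning rate itself by gradient descent
```

The same chain-rule step is what the paper's backpropagation modification performs automatically, without hand-deriving anything.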

  • Riemannian Score-Based Generative Modelling

This paper generalizes score-based generative models (SGMs) to compact Riemannian manifolds, addressing the challenge of designing manifold-valued SGMs.

  • Gradient Estimation with Discrete Stein Operators

The paper introduces a new type of control variate (CV), a variance reduction technique based on Stein operators for discrete distributions, that greatly reduces the variance of gradient estimates in the discrete setting while outperforming a variety of baselines.
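The underlying control-variate trick can be shown with a classic Monte Carlo example (a generic continuous illustration of variance reduction, not the paper's Stein-operator construction for discrete distributions):

```python
import random

random.seed(0)

def variances(n=50_000):
    """Estimate E[X^2] for X ~ Uniform(0,1) with and without a
    control variate g(X) = X, whose mean E[X] = 0.5 is known exactly.
    Both estimators are unbiased; the CV one has far lower variance."""
    xs = [random.random() for _ in range(n)]
    plain = [x * x for x in xs]
    cv = [x * x - (x - 0.5) for x in xs]  # CV coefficient 1.0, picked by hand

    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    return var(plain), var(cv)

v_plain, v_cv = variances()  # the CV estimator's variance is roughly 16x smaller
```

The paper's Stein operators play the role of g here: they generate zero-mean functions for discrete distributions, yielding CVs tailored to discrete gradient estimation.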

  • An Empirical Analysis of Compute-optimal Large Language Model Training by Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack William Rae, Laurent Sifre (DeepMind)

This work demonstrates that current transformer-based large language models (LLMs) are significantly undertrained and proposes several predictive approaches for optimally setting model size and training duration.
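The headline result lends itself to a back-of-the-envelope sketch. Assuming the commonly cited approximations that training cost is C ≈ 6ND FLOPs and that the compute-optimal token count is roughly 20 tokens per parameter (round numbers for illustration, not the paper's fitted coefficients):

```python
import math

def compute_optimal(c_flops, tokens_per_param=20.0):
    """Given a FLOP budget C ~ 6*N*D and the rule of thumb
    D ~ 20*N, solve for the compute-optimal parameter count N
    and training-token count D."""
    n = math.sqrt(c_flops / (6.0 * tokens_per_param))
    return n, tokens_per_param * n

# A Gopher-scale budget of ~5.76e23 FLOPs lands near the paper's
# Chinchilla configuration: ~70B parameters on ~1.4T tokens.
n_opt, d_opt = compute_optimal(5.76e23)
```

Under these assumptions, model size and dataset size should be scaled up together, in contrast to the earlier practice of growing parameters much faster than data.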

  • Beyond Neural Scaling Laws: Beating Power Law Scaling via Data Pruning

The paper demonstrates, in theory and in practice, that the power-law scaling of error with respect to dataset size can be beaten by using intelligent data-pruning metrics in large-scale settings.
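Metric-based pruning of this kind reduces, mechanically, to ranking examples by a per-example difficulty score and keeping a fraction. A sketch with made-up scores (the paper evaluates concrete metrics, such as distance to cluster centroids, for which this stands in):

```python
def prune(examples, scores, keep_frac=0.5, keep="hard"):
    """Keep the hardest (or easiest) keep_frac of examples,
    ranked by their per-example difficulty scores."""
    order = sorted(range(len(examples)),
                   key=lambda i: scores[i],
                   reverse=(keep == "hard"))
    k = max(1, int(len(examples) * keep_frac))
    return [examples[i] for i in order[:k]]

data = ["a", "b", "c", "d"]
difficulty = [0.9, 0.1, 0.5, 0.7]
kept = prune(data, difficulty, keep_frac=0.5)  # keeps the two hardest examples
```

One of the paper's findings is that the right choice between keeping hard versus easy examples depends on how much data is available relative to model capacity.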

  • On-Demand Sampling: Learning Optimally from Multiple Distributions

The paper presents optimal sample complexity bounds for several multi-distribution learning problems. The researchers obtain near-optimal rates for agnostic collaborative learning, group DRO, and agnostic federated learning, improving on prior state-of-the-art guarantees by a large margin.

NeurIPS 2022 also announced LAION-5B: An Open Large-scale Dataset for Training Next Generation Image-Text Models as its Outstanding Datasets paper and MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge as the Outstanding Benchmarks paper.

Last but certainly not least, ImageNet Classification with Deep Convolutional Neural Networks by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton (aka the “AlexNet paper”) was unanimously selected for the NeurIPS 2022 Test of Time award. The influential 2012 paper introduced the first convolutional neural network (CNN) trained on the ImageNet database to surpass state-of-the-art results of the time by a large margin.

NeurIPS 2022 is a hybrid conference that runs from November 28 through December 9. The first week will be held at the New Orleans Convention Center in the US, and the second week will be a virtual gathering.

The full announcement, Announcing the NeurIPS 2022 Awards, is available on the NeurIPS website.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
