NeurIPS 2019 Outstanding Paper Awards
The NeurIPS organizing committee announced the conference’s Outstanding Paper and other awards, and introduced an Outstanding New Directions Paper Award to “highlight work that distinguished itself in setting a novel avenue for future research.”
Outstanding Paper Award
Distribution-Independent PAC Learning of Halfspaces with Massart Noise
Outstanding New Directions Paper Award
Uniform convergence may be unable to explain generalization in deep learning
Honorable Mention Outstanding Paper Award
Nonparametric Density Estimation & Convergence Rates for GANs under Besov IPM Losses
Fast and Accurate Least-Mean-Squares Solvers
Honorable Mention Outstanding New Directions Paper Award
Putting An End to End-to-End: Gradient-Isolated Learning of Representations
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
NeurIPS 2019 | Featured Talks
Yoshua Bengio | From System 1 Deep Learning to System 2 Deep Learning
Vivienne Sze | Efficient Processing of Deep Neural Networks: from Algorithms to Hardware Architectures
Celeste Kidd | How to Know
Mohammad Emtiyaz Khan | Deep Learning with Bayesian Principles
Blaise Aguera y Arcas | Social Intelligence
NeurIPS 2019 | The Numbers
Synced takes a look at the numbers associated with NeurIPS 2019.
The NeurIPS organizing committee also announced that after a Vancouver repeat next year, NeurIPS 2021 will head down under to Sydney.
Dota 2 with Large Scale Deep Reinforcement Learning
OpenAI researchers have used the multiplayer video game Dota 2 as a research platform for general-purpose AI systems. Their Dota 2 AI, called OpenAI Five, learned by playing over 10,000 years of games against itself. It demonstrated the ability to achieve expert-level performance, learn human–AI cooperation, and operate at internet scale.
(OpenAI Paper) / (OpenAI Five 2016–2019)
Artificial Intelligence Index Report 2019
The AI Index Report tracks, collates, distills, and visualizes data relating to artificial intelligence. Its mission is to provide unbiased, rigorously-vetted data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI.
Training Agents Using Upside-Down Reinforcement Learning
Traditional Reinforcement Learning (RL) algorithms either predict rewards with value functions or maximize them using policy search. Researchers study an alternative, Upside-Down Reinforcement Learning (Upside-Down RL), which solves RL problems primarily using supervised learning techniques.
(NNAISENSE & The Swiss AI Lab IDSIA)
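The core trick of Upside-Down RL is to turn reward maximization into supervised learning: past episodes are relabeled as commands such as “achieve this return within this many steps,” and a behavior function is trained to map (state, desired return, desired horizon) to the action that actually accomplished it. A minimal sketch of that relabeling step (a hypothetical toy setup for illustration, not the paper’s implementation):

```python
# Sketch of Upside-Down RL data relabeling: each recorded step becomes a
# supervised example whose input includes the return-to-go and horizon the
# agent actually achieved from that point. A classifier/regressor trained on
# these pairs can then be queried with a *desired* return and horizon.

def make_dataset(episodes):
    """Turn recorded episodes into supervised (command, action) pairs.

    Each episode is a list of (state, action, reward) steps. For step t the
    action is labeled with the return actually obtained from t onward and the
    number of remaining steps, so the model learns: "in this state, this
    action achieved that return within that many steps".
    """
    data = []
    for ep in episodes:
        for t in range(len(ep)):
            state, action, _ = ep[t]
            return_to_go = sum(r for _, _, r in ep[t:])
            horizon = len(ep) - t
            data.append(((state, return_to_go, horizon), action))
    return data

# Tiny illustrative episode: three steps with rewards 1, 0, 2.
episode = [("s0", "a0", 1.0), ("s1", "a1", 0.0), ("s2", "a2", 2.0)]
dataset = make_dataset([episode])
print(dataset[0])  # (('s0', 3.0, 3), 'a0')
```

Training the behavior function is then ordinary supervised learning on `dataset`; at execution time the agent feeds in its current state plus a commanded return and horizon.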
You May Also Like
AI Reimagines Ancient Chinese Poetry
Due to nuanced character choices and other unique literary and aesthetic characteristics, the automatic generation of Chinese poetry is challenging for AI, and end-to-end methods can rarely produce high-quality poems.
Stanford, Kyoto & Georgia Tech Model ‘Neutralizes’ Biased Language
Pryzant and fellow Stanford researchers partnered with researchers from Kyoto University and the Georgia Institute of Technology to develop a novel natural language model that can identify and neutralize biased framings, presuppositions, and attitudes in text.
Global AI Events
January 7–10: CES 2020 in Las Vegas, United States
February 7–12: AAAI 2020 in New York, United States
February 24–27: Mobile World Congress in Barcelona, Spain
March 23–26: GPU Technology Conference (GTC) in San Jose, United States