AI Conference Research

ICLR 2020 Accepted Papers Announced

The International Conference on Learning Representations (ICLR) 2020 is four months away but has already attracted more than its share of drama, with a deluge of submissions and doubts about the qualifications of some reviewers. Yesterday the conference programme chairs finally put the selection process behind them, announcing that 687 of 2,594 submitted papers had made it into ICLR 2020, a 26.5 percent acceptance rate.

ICLR 2020 will be held in Addis Ababa, Ethiopia from April 26 to 30. This will be the first trip to Africa for a major AI conference, a move long encouraged by many leading AI researchers.

All accepted papers will be presented as posters as usual, while 23 percent will also receive an oral presentation: 108 papers will have four-minute spotlights, and 48 will have 10-minute talks.

A small number of papers (fewer than 20) were rejected for violating the dual-submission policy.

To encourage clearer decisions from the 119 area chairs and some 2,200 reviewers, this year’s rating system removed the option of assigning a neutral rating: reviewers chose among reject, weak reject, weak accept and accept.
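As a minimal illustration, the four-way scale and the asymmetric numeric scores it maps to (1, 3, 6 and 8, per OpenReview) can be sketched as follows; the function and variable names here are illustrative, not part of any official tooling:

```python
# Sketch of the ICLR 2020 four-way rating scale and its asymmetric
# numeric scores. The mapping values are taken from the article.
RATING_SCORES = {
    "reject": 1,
    "weak reject": 3,
    "weak accept": 6,
    "accept": 8,
}

def average_score(ratings):
    """Average the numeric scores for one paper's reviewer ratings."""
    scores = [RATING_SCORES[r] for r in ratings]
    return sum(scores) / len(scores)

# A paper whose reviewers all chose "accept" averages the maximum score of 8.
print(average_score(["accept", "accept", "accept"]))  # 8.0
```

Note the asymmetry: a "weak accept" (6) sits much closer to "accept" (8) than "weak reject" (3) sits to "reject" (1), so averages tilt away from the midpoint of the scale.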

Two further adjustments were made to the review process: public comments were closed midway through the discussion period, and an explicit week was set aside for substitute and emergency reviewing.

The chairs will now shift their attention to selecting the conference’s two best paper awards and building the presentation timetable.

Each reviewer’s rating (reject, weak reject, weak accept or accept) was mapped to an asymmetric numeric score of 1, 3, 6 or 8. According to the ICLR 2020 OpenReview Explorer, 34 papers achieved the highest possible average score of 8. Here’s the list:

NAS-Bench-102: Extending the Scope of Reproducible Neural Architecture Search
Authors: Xuanyi Dong, Yi Yang
Institution: University of Technology Sydney
Keywords: Neural Architecture Search, AutoML, Benchmark

On The “Steerability” of Generative Adversarial Networks
Authors: Ali Jahanian, Lucy Chai, Phillip Isola
Institution: Massachusetts Institute of Technology
Keywords: Generative Adversarial Network, Latent Space Interpolation, Dataset Bias, Model Generalization

A Generalized Training Approach for Multiagent Learning
Authors: Paul Muller, Shayegan Omidshafiei, Mark Rowland, Karl Tuyls, Julien Perolat, Siqi Liu, Daniel Hennes, Luke Marris, Marc Lanctot, Edward Hughes, Zhe Wang, Guy Lever, Nicolas Heess, Thore Graepel, Remi Munos
Institution: Google
Keywords: Multiagent Learning, Game Theory, Training, Games

Mirror-Generative Neural Machine Translation
Authors: Zaixiang Zheng, Hao Zhou, Shujian Huang, Lei Li, Xin-Yu Dai, Jiajun Chen
Institutions: Nanjing University, ByteDance
Keywords: Neural Machine Translation, Generative Model, Mirror

Understanding and Robustifying Differentiable Architecture Search
Authors: Arber Zela, Thomas Elsken, Tonmoy Saikia, Yassine Marrakchi, Thomas Brox, Frank Hutter
Institutions: Technical University Munich, University of Freiburg
Keywords: Neural Architecture Search, AutoML, AutoDL, Deep Learning, Computer Vision

Sparse Coding with Gated Learned ISTA
Authors: Kailun Wu, Yiwen Guo, Ziang Li, Changshui Zhang
Institutions: Tsinghua University, ByteDance
Keywords: Sparse Coding, Deep Learning, Learned ISTA, Convergence Analysis

The Logical Expressiveness of Graph Neural Networks
Authors: Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, Juan Pablo Silva
Institutions: Universidad de Chile, Millennium Institute for Foundational Research on Data, Pontificia Universidad Católica
Keywords: Graph Neural Networks, First Order Logic, Expressiveness

Implementation Matters in Deep RL: A Case Study on PPO and TRPO
Authors: Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry
Institution: Massachusetts Institute of Technology
Keywords: Deep Policy Gradient Methods, Deep Reinforcement Learning, TRPO, PPO

Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
Authors: Donghyun Na, Hae Beom Lee, Hayeon Lee, Saehoon Kim, Minseop Park, Eunho Yang, Sung Ju Hwang
Institutions: Korea Advanced Institute of Science and Technology, POSTECH
Keywords: Meta-Learning, Few-Shot Learning, Bayesian Neural Network, Variational Inference, Learning to Learn, Imbalanced and out-of-Distribution Tasks for Few-Shot Learning

Recurrent Hierarchical Topic-Guided Neural Language Models
Authors: Dandan Guo, Bo Chen, Ruiying Lu, Mingyuan Zhou
Institutions: Xidian University, University of Texas at Austin
Keywords: Bayesian Deep Learning, Recurrent Gamma Belief Net, Larger-Context Language Model, Variational Inference, Sentence Generation, Paragraph Generation

Depth-Width Trade-offs for ReLU Networks via Sharkovsky’s Theorem
Authors: Vaggos Chatziafratis, Sai Ganesh Nagarajan, Ioannis Panageas, Xiao Wang
Institutions: Stanford University, Singapore University of Technology and Design
Keywords: Depth-Width Trade-Offs, ReLU Networks, Chaos Theory, Sharkovsky Theorem, Dynamical Systems

GenDICE: Generalized Offline Estimation of Stationary Values
Authors: Ruiyi Zhang, Bo Dai, Lihong Li, Dale Schuurmans
Institutions: Duke University, Google
Keywords: Off-policy Policy Evaluation, Reinforcement Learning, Stationary Distribution Correction Estimation, Fenchel Dual

FreeLB: Enhanced Adversarial Training for Language Understanding
Authors: Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, Jingjing Liu
Institutions: University of Maryland, Microsoft

Why Gradient Clipping Accelerates Training: A Theoretical Justification for Adaptivity
Authors: Jingzhao Zhang, Tianxing He, Suvrit Sra, Ali Jadbabaie
Institution: Massachusetts Institute of Technology
Keywords: Adaptive Methods, Optimization, Deep Learning

Principled Weight Initialization for Hypernetworks
Authors: Oscar Chang, Lampros Flokas, Hod Lipson
Institution: Columbia University
Keywords: Hypernetworks, Initialization, Optimization, Meta-Learning

Enhancing Adversarial Defense by k-Winners-Take-All
Authors: Chang Xiao, Peilin Zhong, Changxi Zheng
Institution: Columbia University
Keywords: Adversarial Defense, Activation Function, Winner Takes All

Dynamics-Aware Unsupervised Skill Discovery
Authors: Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, Karol Hausman
Institution: Google
Keywords: Reinforcement Learning, Unsupervised Learning, Model-Based Learning, Deep Learning, Hierarchical Reinforcement Learning

Differentiable Reasoning over a Virtual Knowledge Base
Authors: Bhuwan Dhingra, Manzil Zaheer, Vidhisha Balachandran, Graham Neubig, Ruslan Salakhutdinov, William W. Cohen
Institutions: Carnegie Mellon University, Google
Keywords: Question Answering, Multi-Hop QA, Deep Learning, Knowledge Bases, Information Extraction, Data Structures for QA

Data-dependent Gaussian Prior Objective for Language Generation
Authors: Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao
Institutions: Shanghai Jiao Tong University, National Institute of Information and Communications Technology
Keywords: Gaussian Prior Objective, Language Generation

Geometric Analysis of Nonconvex Optimization Landscapes for Overcomplete Learning
Authors: Qing Qu, Yuexiang Zhai, Xiao Li, Yuqian Zhang, Zhihui Zhu
Institutions: New York University, University of California Berkeley, The Chinese University of Hong Kong, Columbia University, Johns Hopkins University
Keywords: Dictionary Learning, Sparse Representations, Nonconvex Optimization

Mathematical Reasoning in Latent Space
Authors: Dennis Lee, Christian Szegedy, Markus Rabe, Sarah Loos, Kshitij Bansal
Institution: Google
Keywords: Machine Learning, Formal Reasoning

Contrastive Learning of Structured World Models
Authors: Thomas Kipf, Elise van der Pol, Max Welling
Institution: University of Amsterdam
Keywords: State Representation Learning, Graph Neural Networks, Model-Based Reinforcement Learning, Relational Learning, Object Discovery

Rotation-Invariant Clustering of Functional Cell Types in Primary Visual Cortex
Authors: Ivan Ustyuzhaninov, Santiago A. Cadena, Emmanouil Froudarakis, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Fabian H. Sinz, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker
Institutions: University of Tübingen, Baylor College of Medicine, University of Goettingen
Keywords: Computational Neuroscience, Neural System Identification, Functional Cell Types, Deep Learning, Rotational Equivariance

Optimal Strategies Against Generative Attacks
Authors: Roy Mor, Erez Peterfreund, Matan Gavish, Amir Globerson
Institutions: Tel Aviv University, Hebrew University of Jerusalem

CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning
Authors: Rohit Girdhar, Deva Ramanan
Institution: Carnegie Mellon University

Causal Discovery with Reinforcement Learning
Authors: Shengyu Zhu, Ignavier Ng, Zhitang Chen
Institutions: Huawei, University of Toronto
Keywords: Causal Discovery, Structure Learning, Reinforcement Learning, Directed Acyclic Graph

Simplified Action Decoder for Deep Multi-Agent Reinforcement Learning
Authors: Hengyuan Hu, Jakob N Foerster
Institutions: Carnegie Mellon University, University of Oxford
Keywords: Multi-Agent RL, Theory of Mind

Smooth Markets: A Basic Mechanism for Organizing Gradient-Based Learners
Authors: David Balduzzi, Wojciech M. Czarnecki, Edward Hughes, Joel Leibo, Ian Gemp, Tom Anthony, Georgios Piliouras, Thore Graepel
Institution: DeepMind
Keywords: Game Theory, Optimization, Gradient Descent, Adversarial Learning

Meta-Learning with Warped Gradient Descent
Authors: Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Francesco Visin, Hujun Yin, Raia Hadsell
Institutions: Google, University College London, Politecnico di Milano
Keywords: Meta-Learning, Transfer Learning

Differentiation of Blackbox Combinatorial Solvers
Authors: Marin Vlastelica Pogančić, Anselm Paulus, Vit Musil, Georg Martius, Michal Rolinek
Institutions: Max Planck Institute, Università degli Studi di Firenze
Keywords: Combinatorial Algorithms, Deep Learning, Representation Learning, Optimization

A Theory of Usable Information under Computational Constraints
Authors: Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, Stefano Ermon
Institutions: Peking University, Stanford University, University of Southern California

How much Position Information Do Convolutional Neural Networks Encode?
Authors: Md Amirul Islam, Sen Jia, Neil D. B. Bruce
Institution: Ryerson University
Keywords: Network Understanding, Absolute Position Information

BackPACK: Packing more into Backprop
Authors: Felix Dangel, Frederik Kunstner, Philipp Hennig
Institutions: University of Tübingen, University of British Columbia

A full list of the accepted papers for ICLR 2020 is available on OpenReview.

Journalist: Yuan Yuan | Editor: Michael Sarazen