
ICML 2018 Announces Best Paper Awards

The International Conference on Machine Learning (ICML) 2018 will be held July 10 – 15 in Stockholm, Sweden. Yesterday, from more than 600 accepted papers, the prestigious conference announced its Best Paper Awards.

Two papers shared top honours: Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, by Anish Athalye of MIT and Nicholas Carlini and David Wagner of UC Berkeley; and Delayed Impact of Fair Machine Learning, from a UC Berkeley research group led by Lydia T. Liu and Sarah Dean.

The Best Paper Runner Up Awards go to Near Optimal Frequent Directions for Sketching Dense and Sparse Matrices, from Professor Zengfeng Huang of Fudan University; The Mechanics of n-Player Differentiable Games, from David Balduzzi, Sébastien Racanière, James Martens, Jakob Foerster, Karl Tuyls and Thore Graepel of DeepMind and the University of Oxford; and Fairness Without Demographics in Repeated Loss Minimization, from a Stanford research group including Tatsunori B. Hashimoto, Megha Srivastava, Hongseok Namkoong, and Percy Liang.
Below are the abstracts of the five papers.

Best Paper Awards:

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Paper: https://arxiv.org/abs/1802.00420
Github: https://github.com/anishathalye/obfuscated-gradients

Abstract: We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.
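
One of the attack techniques the authors develop, Backward Pass Differentiable Approximation (BPDA), sidesteps a non-differentiable defensive preprocessor by treating it as the identity on the backward pass. Below is a minimal PyTorch sketch of one BPDA ascent step; `model` and `preprocess` are hypothetical stand-ins for a classifier and a shattered-gradient defense.

```python
import torch
import torch.nn.functional as F

def bpda_attack_step(model, preprocess, x, y, step_size):
    """One BPDA ascent step: the non-differentiable defense g = preprocess
    is used on the forward pass but treated as the identity on the
    backward pass (a straight-through estimator)."""
    x = x.clone().requires_grad_(True)
    with torch.no_grad():
        g_x = preprocess(x)                 # defense output, no gradient
    x_def = x + (g_x - x).detach()          # forward: g(x); backward: identity
    loss = F.cross_entropy(model(x_def), y)
    loss.backward()
    # Ascend the loss in the sign-gradient direction, stay in pixel range.
    return (x + step_size * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Iterating this step recovers an attack that the defense's masked gradients would otherwise block.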


Delayed Impact of Fair Machine Learning
Paper: https://arxiv.org/abs/1803.04383

Abstract: Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect.
We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not.
We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably.
Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
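
To make the one-step feedback model concrete, here is a toy lending sketch with hypothetical numbers: a lender approves the top β fraction of a group by credit score, and the group's expected mean-score change depends on how many approved applicants repay. Policies that push the selection rate β into the regime where this change is negative cause exactly the harm the abstract describes.

```python
import numpy as np

# Discrete credit-score model for one group (all numbers hypothetical).
scores = np.array([300, 400, 500, 600, 700])      # score levels
pop = np.array([0.10, 0.20, 0.30, 0.25, 0.15])    # fraction at each level
repay = np.array([0.30, 0.45, 0.60, 0.75, 0.90])  # repayment prob. per level
c_plus, c_minus = 25.0, -50.0  # score change on repayment / default

def mean_score_change(beta):
    """Expected per-capita score change when the top beta fraction
    of the group (by score) is approved for a loan."""
    remaining, delta = beta, 0.0
    for i in np.argsort(scores)[::-1]:            # highest scores first
        take = min(pop[i], remaining)
        delta += take * (repay[i] * c_plus + (1 - repay[i]) * c_minus)
        remaining -= take
        if remaining <= 0:
            break
    return delta

for beta in (0.2, 0.5, 0.9):
    print(beta, round(mean_score_change(beta), 3))
# The change turns negative at high selection rates: a criterion that
# forces a higher beta for this group can lower its average score.
```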

Best Paper Runner Up Awards:

Near Optimal Frequent Directions for Sketching Dense and Sparse Matrices
Paper: http://www.cse.ust.hk/~huangzf/ICML18.pdf

Abstract: Given a large matrix A ∈ ℝ^(n×d), we consider the problem of computing a sketch matrix B ∈ ℝ^(ℓ×d) which is significantly smaller than but still well approximates A. We are interested in minimizing the covariance error ‖AᵀA − BᵀB‖₂.

We consider the problems in the streaming model, where the algorithm can only make one pass over the input with limited working space. The popular Frequent Directions algorithm of (Liberty, 2013) and its variants achieve optimal space-error tradeoff. However, whether the running time can be improved remains an unanswered question. In this paper, we almost settle the time complexity of this problem. In particular, we provide new space-optimal algorithms with faster running times. Moreover, we also show that the running times of our algorithms are near-optimal unless the state-of-the-art running time of matrix multiplication can be improved significantly.
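
For context, the Frequent Directions routine whose running time the paper improves is itself short. A NumPy sketch of the textbook variant (Liberty, 2013), assuming ℓ ≤ d: whenever the sketch fills up, every squared singular value is shrunk by the smallest one, which frees at least one row.

```python
import numpy as np

def frequent_directions(A, ell):
    """One-pass streaming sketch B (ell x d) of A (n x d), ell <= d."""
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:                                  # single pass over rows
        zero_rows = np.where(~B.any(axis=1))[0]
        if len(zero_rows) == 0:
            # Sketch is full: shrink singular values by the smallest one.
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[-1] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt                    # last row is now zero
            zero_rows = np.where(~B.any(axis=1))[0]
        B[zero_rows[0]] = row                      # insert the new row
    return B
```

The SVD inside the loop dominates the cost, which is exactly the bottleneck the paper's faster space-optimal algorithms address.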


The Mechanics of n-Player Differentiable Games
Paper: https://arxiv.org/abs/1802.05642

Abstract: The cornerstone underpinning deep learning is the guarantee that gradient descent on an objective converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, where there are multiple interacting losses. The behavior of gradient-based methods in games is not well understood – and is becoming increasingly important as adversarial and multiobjective architectures proliferate. In this paper, we develop new techniques to understand and control the dynamics in general games. The key result is to decompose the second-order dynamics into two components. The first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law, akin to conservation laws in classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in general games. Basic experiments show SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs – whilst at the same time being applicable to – and having guarantees in – much more general games.
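
SGA adjusts the simultaneous gradient ξ by λ·Aᵀξ, where A is the antisymmetric part of the Jacobian of ξ. A small NumPy sketch on the zero-sum bilinear game f₁ = xy, f₂ = −xy (a toy stand-in for a GAN), using a finite-difference Jacobian; the learning rate and λ here are illustrative.

```python
import numpy as np

def sga_step(xi, v, lr=0.03, lam=1.0, h=1e-5):
    """One step of Symplectic Gradient Adjustment: descend along
    xi(v) + lam * A^T xi(v), where A is the antisymmetric part of
    the Jacobian of the simultaneous gradient xi."""
    n = v.size
    J = np.zeros((n, n))
    for j in range(n):                        # finite-difference Jacobian
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (xi(v + e) - xi(v - e)) / (2 * h)
    A = 0.5 * (J - J.T)                       # antisymmetric part
    return v - lr * (xi(v) + lam * A.T @ xi(v))

# Zero-sum bilinear game: f1 = x*y, f2 = -x*y, so xi(v) = (y, -x).
xi = lambda v: np.array([v[1], -v[0]])
v = np.array([1.0, 1.0])
for _ in range(500):
    v = sga_step(xi, v)
print(v)  # approaches the fixed point (0, 0); plain descent spirals outward
```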


Fairness Without Demographics in Repeated Loss Minimization
Paper: https://arxiv.org/abs/1806.08010

Abstract: Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even make initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
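
The DRO objective the authors use has a one-dimensional dual form over per-example losses, which makes it cheap to evaluate: minimize over η the quantity C·sqrt(E[(ℓ − η)₊²]) + η, where C is a constant set by the radius of the chi-squared ball. A NumPy sketch with hypothetical losses and C, using a simple grid search over η:

```python
import numpy as np

def dro_risk(losses, C):
    """Chi-squared DRO risk via its dual:
    min over eta of  C * sqrt(mean(max(loss - eta, 0)^2)) + eta.
    High-loss examples are up-weighted without any group labels."""
    def objective(eta):
        excess = np.maximum(losses - eta, 0.0)
        return C * np.sqrt(np.mean(excess ** 2)) + eta
    # Coarse grid search over eta (the dual is a 1-D convex problem).
    etas = np.linspace(losses.min() - 1.0, losses.max(), 1000)
    return min(objective(eta) for eta in etas)

losses = np.random.default_rng(0).exponential(1.0, 1000)  # hypothetical losses
print(losses.mean(), dro_risk(losses, C=2.0))  # DRO risk exceeds the ERM average
```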


Author: Jessie Geng | Editor: Michael Sarazen
