
NIPS: 2017 Day 1 & 2 Highlights

NIPS has become the world’s leading conference on machine learning and computational neuroscience, with 8,000 attendees flocking to Long Beach, California this week for NIPS 2017, the most popular yet.

The Neural Information Processing Systems Conference (NIPS) has come a long way since 100 invitation-only researchers gathered in Denver 31 years ago.

In the opening keynote, Program Chair Samy Bengio thanked his team for making the conference happen, announced this year's Best Paper awards, and presented the official stats for NIPS 2017.


NIPS 2017 Official Stats

8,000 Registered Attendees
7 Invited Speakers
2 Parallel Tracks (mixed orals and spotlights)
3 Poster Sessions

3,240 submissions were received for review: a record 30% increase, and almost double the total submissions to this year's ICML (International Conference on Machine Learning).

156 subareas: a 150% increase from last year. To accommodate this growth, the subject-area structure was reorganized into 156 lower-level areas under 9 top-level subjects.


To ensure fairness in the paper review process, the following constraints were applied to reviewers:

  • No conflicts of interest
  • Maximize positive bids and minimize negative bids
  • No more than 2 reviewers from any institution
  • No more than 6 papers per reviewer
  • No more than 18 papers per area chair

There were 9,747 reviews with at least 3 reviews per submission. Although the acceptance rate for papers dipped to 21%, the high submission volume saw 679 papers accepted for presentation, a 90% increase over last year.

Each year the Program Chairs refresh the NIPS agenda. This year five new official competitions, selected from 23 proposals, made it to the conference: 1) Conversational Intelligence (chatbots); 2) Human-Computer QA, similar to the game show Jeopardy!; 3) Learning to Run, in which human avatars are trained to run using pure reinforcement learning; 4) Personalized Medicine; and 5) Adversarial Attacks/Defenses.


 

Day 1 Highlights

After the Opening Remarks, the first invited talk was Powering the Next 100 Years, delivered by Google engineer John Platt.

NIPS Tutorials are intended to present new or mature approaches that broaden attendees' research interests. There were three parallel tutorial tracks on Day 1. Synced paid particular attention to the deep learning sessions, and we share some takeaways below.


Deep Learning: Practice and Trends, as the name suggests, discussed state-of-the-art deep neural networks and the applications they enhance. The talk was given by Professor Nando de Freitas of Oxford University, who pointed out that deep learning consists of three main building blocks: I/O modalities, architectures, and losses.

Prof. de Freitas said that tailoring these building blocks to the problem at hand is a key to success: even with a good architecture, a wrongly designed objective can result in failure.

With regard to trends, Prof. de Freitas surveyed a variety of successful applications: autoregressive models, domain alignment, meta-learning, graph networks, and program induction.

In the Deep Probabilistic Modelling with Gaussian Processes session, Professor Neil Lawrence from the University of Sheffield discussed modeling neural networks from a probabilistic perspective, with the assumption that models can be represented by Gaussian processes.
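To make the Gaussian-process idea concrete, here is a minimal NumPy sketch of GP regression, our own illustration rather than code from the tutorial; the RBF kernel choice, function names, and parameters are all assumptions:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and covariance of a zero-mean GP at the test inputs."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)          # K^{-1} y
    mean = K_s.T @ alpha                          # posterior mean
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)  # posterior covariance
    return mean, cov

# Fit three noisy-free observations of sin(x) and query two points.
x = np.array([-2.0, 0.0, 2.0])
y = np.sin(x)
mean, cov = gp_posterior(x, y, np.array([0.0, 1.0]))
```

With near-zero observation noise the posterior mean interpolates the training targets, and the posterior variance collapses at observed inputs, which is the "uncertainty-aware" behavior Lawrence emphasized.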

Geometric Deep Learning on Graphs and Manifolds was the most mathematically challenging session of the day, introducing measures on non-Euclidean spaces, relating graphs and manifolds to applications, and attempting to bridge these notions to convolutional neural networks.

Powering the Next 100 Years was an invited talk presented by Google Principal Scientist John Platt, who predicted that a worldwide shortage of electrical power will develop by 2100. In Platt's view, the most effective response would be generating clean energy by simulating the process in the sun's inner core: fusion. But such research is difficult and even dangerous. How can machine learning contribute? Platt's team is working closely with physicists to explore the safe zones of experiment parameters.

Among the most popular posters in the Poster Session were:

  1. Unsupervised Sequence Classification using Sequential Output Statistics
  2. A Meta-Learning Perspective on Cold-Start Recommendations for Items
  3. Gradient Episodic Memory for Continual Learning

 

Day 2 Highlights

The star of Day 2 was a session introducing the paper Dynamic Routing Between Capsules. Even though it was only a five-minute spotlight, the hall was already full well before the session began. The papers Safe and Nested Subgame Solving for Imperfect-Information Games and A Linear-Time Kernel Goodness-of-Fit Test also garnered much attention.

Speaking at the Test-of-Time Award presentation, Ali Rahimi recalled working with colleague Ben Recht in 2007, when what he calls the 'NIPS rigor police' were reviewing papers. He argued that "machine learning has become alchemy," while allowing that alchemy worked and helped invent many things.

Alchemy image used by Ali Rahimi as a metaphor

“If you are building photo sharing systems, alchemy is OK,” said Rahimi. “But we are beyond that now. We are building systems that govern healthcare and mediate our civic dialogue. We influence elections. I would like to live in a society where systems are built on top of verifiable, rigorous, thorough knowledge, and not alchemy. As aggravating as the NIPS rigor police were, I miss them, and I wish for them to come back.”


Best Paper Award presentations were held in the Theory and Algorithms sessions. Speakers began by outlining fundamental problems in their fields, then explained how their work differs from previous approaches. From their presentations, it was easy to conclude that the greatest work comes from well-defined research goals and clear boundaries.

Google’s Nicholas Frosst, co-author of the paper Dynamic Routing Between Capsules, illustrated the basic idea behind capsules, then presented surprisingly good results in his closing remarks. A number of details were not covered due to time limits, which led to a packed poster session: Frosst continued speaking and answering questions for at least three hours.
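The core of the paper is the routing-by-agreement procedure: lower-level capsules vote on higher-level capsule outputs, and coupling coefficients are iteratively sharpened toward the outputs they agree with. The following NumPy sketch is our own schematic of that loop, not the authors' code; the shapes, names, and random inputs are illustrative, and a real implementation would route learned transformation outputs inside a trained network:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squashing nonlinearity from the paper: preserves direction,
    maps the vector norm into [0, 1)."""
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def route(u_hat, iterations=3):
    """Dynamic routing between two capsule layers.

    u_hat: prediction vectors from lower capsules,
           shape (num_in, num_out, dim).
    Returns output capsule vectors, shape (num_out, dim).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits, start uniform
    for _ in range(iterations):
        # Coupling coefficients: softmax over output capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions for each output capsule.
        s = np.einsum('ij,ijd->jd', c, u_hat)
        v = squash(s)
        # Increase logits where predictions agree with the output.
        b = b + np.einsum('ijd,jd->ij', u_hat, v)
    return v

rng = np.random.default_rng(0)
v = route(rng.normal(size=(8, 3, 4)))  # 8 input capsules, 3 outputs, dim 4
```

Because of the squash nonlinearity, each output capsule's vector length stays below 1 and can be read as the probability that the entity it represents is present.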

NIPS 2017 continues until December 9 at the Long Beach Convention and Entertainment Center. The conference is completely sold out.


Journalist: Qintong Wu, Chain Zhang | Editor: Michael Sarazen, Meghan Han
