
Exciting Papers and Projects from Google

Synced Global AI Weekly February 10th

Subscribe to Synced Global AI Weekly


Google’s Lunar New Year AI Surprises
Google rang in the Lunar New Year with a couple of AI-powered treats: a new Live Transcribe service to help the deaf and hard of hearing, and a Google Doodle showcasing the ancient Chinese art of Shadow Puppetry.
(Synced) 


Towards Federated Learning at Scale: System Design
Federated Learning is a distributed machine learning approach which enables model training on a large corpus of decentralized data. Researchers have built a scalable production system for Federated Learning in the domain of mobile devices, based on TensorFlow. In this paper, they describe the resulting high-level design, sketch some of the challenges and their solutions, and touch upon the open problems and future directions.
(Google) 
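The server-side loop the system orchestrates follows the familiar Federated Averaging pattern: devices run a few steps of local training on their own data, and the server aggregates only the resulting weights. Below is a minimal NumPy sketch of that pattern on a toy linear model; it illustrates the idea only and is not the production TensorFlow system described in the paper.

```python
import numpy as np

def client_update(global_weights, local_data, lr=0.1, epochs=1):
    """One round of local training on a single device (toy linear model, MSE loss)."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # full-batch MSE gradient
        w -= lr * grad
    return w, len(y)

def federated_averaging(global_weights, clients, rounds=50):
    """Server loop: collect client updates, average them weighted by local data size."""
    for _ in range(rounds):
        updates = [client_update(global_weights, data) for data in clients]
        total = sum(n for _, n in updates)
        global_weights = sum(w * (n / total) for w, n in updates)
    return global_weights

# Toy usage: three simulated devices, each holding private (X, y) data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)
print(w)  # close to [2, -1] without the server ever seeing raw client data
```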


Analyzing and Improving Representations with the Soft Nearest Neighbor Loss
Researchers explore and expand the Soft Nearest Neighbor Loss to measure the entanglement of class manifolds in representation space: i.e., how close pairs of points from the same class are relative to pairs of points from different classes. They demonstrate several use cases of the loss. As an analytical tool, it provides insights into the evolution of class similarity structures during learning.
(Geoffrey Hinton's team at Google Brain) 
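For readers who want to probe their own representations, here is a rough NumPy reading of the soft nearest neighbor loss as defined in the paper: for each point, the log-ratio of same-class neighbor similarity to all-neighbor similarity at temperature T, averaged over the batch. Normalization details may differ from the authors' implementation.

```python
import numpy as np

def soft_nearest_neighbor_loss(x, y, T=1.0):
    """Soft nearest neighbor loss for a batch of representations x with labels y.

    Low values mean points sit closer to same-class neighbors than to other-class
    points (low entanglement); high values mean the classes are entangled.
    """
    d = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    sim = np.exp(-d / T)
    np.fill_diagonal(sim, 0.0)            # exclude i == j
    same = (y[:, None] == y[None, :])     # mask of same-class pairs
    num = (sim * same).sum(axis=1)
    den = sim.sum(axis=1)
    eps = 1e-12
    return -np.mean(np.log((num + eps) / (den + eps)))

# Example: a well-separated batch scores much lower than the same batch with shuffled labels.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(size=(32, 8)), rng.normal(size=(32, 8)) + 5.0])
y = np.array([0] * 32 + [1] * 32)
print(soft_nearest_neighbor_loss(x, y))                   # small (classes disentangled)
print(soft_nearest_neighbor_loss(x, rng.permutation(y)))  # larger (classes entangled)
```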


The Hanabi Challenge: A New Frontier for AI Research
Google AI and DeepMind researchers have open-sourced the Hanabi Learning Environment, a testbed for collaborative multi-agent learning research. In this paper, they propose an experimental framework for the research community to evaluate algorithmic advances, and assess the performance of current state-of-the-art techniques.
(Paper) / (GitHub)
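Getting started is straightforward; the sketch below plays random legal moves in a two-player game, following the pattern of the example script in the repository (module and observation field names are quoted from memory of the repo and may differ slightly from the current API).

```python
import random
from hanabi_learning_environment import rl_env  # requires the open-sourced package to be installed

# Two-player game of full Hanabi; smaller presets also exist in the repo.
env = rl_env.make('Hanabi-Full', num_players=2)
observations = env.reset()

done = False
while not done:
    current = observations['current_player']
    # Each player observation lists the moves that are currently legal for that player.
    legal_moves = observations['player_observations'][current]['legal_moves']
    action = random.choice(legal_moves)
    observations, reward, done, _ = env.step(action)

print('episode finished, last reward:', reward)
```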


Open Sourcing ClusterFuzz
Fuzzing is an automated method for detecting bugs in software that works by feeding unexpected inputs to a target program. It is effective at finding memory corruption bugs, which often have serious security implications. Manually finding these issues is both difficult and time consuming, and bugs often slip through despite rigorous code review practices. Google has now open-sourced ClusterFuzz, the scalable fuzzing infrastructure it uses to fuzz Chrome and to power OSS-Fuzz.
(Google Open Source) 
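The core loop of any fuzzer is simple enough to sketch in a few lines: the toy example below throws random byte strings at a stand-in target and records the inputs that crash it. ClusterFuzz layers coverage-guided mutation, crash deduplication, and automated bisection on top of this basic idea; the code here is only an illustration, not its architecture.

```python
import random

def target(data: bytes) -> None:
    """Stand-in for the program under test; 'crashes' on an unexpected first byte."""
    if data and data[0] == 0xFF:
        raise ValueError("parser crashed on unexpected header byte")

def fuzz(iterations=100_000, max_len=16, seed=0):
    """Blind random fuzzing: generate inputs, run the target, collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

print(len(fuzz()), "crashing inputs found")
```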

Technology

Recurrent Experience Replay in Distributed Reinforcement Learning
Researchers study the effects of parameter lag, which results in representational drift and recurrent state staleness, and empirically derive an improved training strategy. Using a single network architecture and a fixed set of hyperparameters, the resulting agent, Recurrent Replay Distributed DQN (R2D2), quadruples the previous state of the art on Atari-57 and matches the state of the art on DMLab-30. It is the first agent to exceed human-level performance in 52 of the 57 Atari games.
(DeepMind) 
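The improved training strategy centers on two ideas: storing the recurrent state alongside each replayed sequence, and "burning in" a prefix of the sequence to refresh that now-stale state before any loss is computed. Below is a schematic PyTorch sketch of the burn-in step; the network, sequence layout, and targets are placeholders rather than DeepMind's implementation.

```python
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq, state):
        out, state = self.lstm(obs_seq, state)
        return self.head(out), state

def sequence_loss(net, obs_seq, stored_state, targets, burn_in=10):
    """Burn-in: unroll the first `burn_in` steps from the stale stored state
    without gradients, then train on the remainder with the refreshed state."""
    with torch.no_grad():
        _, state = net(obs_seq[:, :burn_in], stored_state)
    q, _ = net(obs_seq[:, burn_in:], state)
    # Placeholder objective: regress max-Q toward externally computed targets.
    return nn.functional.mse_loss(q.max(dim=-1).values, targets[:, burn_in:])

# Toy shapes: batch of 8 sequences, length 40, 12-dim observations, 4 actions.
net = RecurrentQNet(12, 4)
obs = torch.randn(8, 40, 12)
stored = (torch.zeros(1, 8, 64), torch.zeros(1, 8, 64))
targets = torch.randn(8, 40)
print(sequence_loss(net, obs, stored, targets).item())
```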


Facebook Boosts Cross-Lingual Language Model Pretraining Performance
Facebook researchers have introduced two new methods for pretraining cross-lingual language models (XLMs). The unsupervised method relies only on monolingual data, while the supervised method leverages parallel data with a new cross-lingual language modeling objective.
(Synced) 
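The supervised objective, translation language modeling, concatenates a sentence with its translation and masks tokens on both sides, so predicting a masked word can draw on the parallel context. A toy sketch of that data preparation is below; tokenization, special symbols, and the masking policy are simplified stand-ins rather than Facebook's code.

```python
import random

MASK = "[MASK]"

def make_tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=0):
    """Concatenate a parallel pair and mask tokens on both sides, so the model
    can use the translation as context when predicting a masked word."""
    rng = random.Random(seed)
    stream = ["</s>"] + src_tokens + ["</s>"] + tgt_tokens + ["</s>"]
    # Per-position language tags (which side of the pair a token came from).
    langs = ["en"] * (len(src_tokens) + 2) + ["fr"] * (len(tgt_tokens) + 1)
    inputs, labels = [], []
    for tok in stream:
        if tok != "</s>" and rng.random() < mask_prob:
            inputs.append(MASK)
            labels.append(tok)      # predict the original token here
        else:
            inputs.append(tok)
            labels.append(None)     # no loss at this position
    return inputs, labels, langs

inputs, labels, langs = make_tlm_example(
    "the cat sat".split(), "le chat était assis".split())
print(list(zip(inputs, langs)))
```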


A Lyapunov-based approach for safe RL algorithms
In this work, researchers first formulate safety in terms of constraints and then propose a class of reinforcement learning algorithms that 1) learn optimal safe policies, and 2) do not generate policies that violate the constraints (unsafe policies), even during training. The main characteristic of these algorithms is the use of Lyapunov functions, a concept extensively studied in control theory for analyzing the stability of dynamical systems, to guarantee safety.
(Facebook AI Research)
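Very roughly, the Lyapunov machinery restricts policy updates to actions whose expected constraint cost-to-go stays within a per-state budget. The tabular toy below illustrates that flavor of check under assumptions of our own (hand-made transition, cost, and candidate Lyapunov tables); it is not the paper's construction or its guarantees.

```python
import numpy as np

def safe_action_set(state, L, P, cost, budget):
    """Actions whose immediate cost plus expected next-state Lyapunov value
    does not exceed the current Lyapunov value plus a small budget."""
    safe = []
    for a in range(cost.shape[1]):
        expected_next = P[state, a] @ L        # E[L(s') | s, a]
        if cost[state, a] + expected_next <= L[state] + budget:
            safe.append(a)
    return safe

# Toy 3-state, 2-action constrained MDP with hand-made tables.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))     # P[s, a, s'] transition probabilities
cost = np.array([[0.0, 0.5],
                 [0.1, 0.9],
                 [0.0, 0.2]])                  # per-step constraint cost
L = np.array([0.2, 0.8, 0.1])                  # candidate Lyapunov (cost-to-go) values

for s in range(3):
    print("state", s, "safe actions:", safe_action_set(s, L, P, cost, budget=0.1))
```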

You May Also Like

David vs Goliath: Clarifai CEO Matt Zeiler Takes On the Tech Giants
This is the first installment of the Synced Lunar New Year Project, a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this article, Synced chats with Clarifai Founder and CEO Matt Zeiler on recent progress in computer vision and his company’s plans for the future.
(Synced)


On Compilers: First TVM and Deep Learning Conference
Just a week after NeurIPS closed, the first TVM and Deep Learning Compiler Conference kicked off in Seattle. Researchers at the University of Washington's SAMPL group (a collaboration between Sampa, Syslab, MODE, and PLSE) developed the open-source TVM deep learning compiler stack for CPUs, GPUs, and specialized accelerators to close the gap between deep learning frameworks and hardware backends.
(Synced)

Global AI Events

February 12 – 15, IBM Think 2019. San Francisco, United States

March 17 – 20, ACM IUI. Los Angeles, United States

Global AI Opportunities
