Hello Quantum World! Google Publishes Landmark Quantum Supremacy Claim
A Google research team says its quantum computer carried out a specific calculation that lies beyond the practical capabilities of ordinary, ‘classical’ machines. Google estimates the same calculation would take even the best classical supercomputer 10,000 years to complete.
(Nature) / (Google AI Blog)
Milestone: BERT Boosts Google Search
In what the company calls “the biggest leap forward in the past five years, and one of the biggest leaps forward in the history of Search,” Google announced that it has leveraged its pretrained language model BERT to dramatically improve the understanding of search queries.
(Google) / (Synced)
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Google researchers explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks.
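The core idea is that every task, whether translation, classification, or regression, is cast as feeding the model an input string and training it to produce an output string. The sketch below illustrates that framing with a hypothetical helper function; the task prefixes follow the conventions described in the paper, but the function itself is illustrative and not part of any released API.

```python
# Illustrative sketch of the text-to-text framing: each task-specific example
# becomes a (source text, target text) pair distinguished by a task prefix.

def to_text_to_text(task, example):
    """Cast a task-specific example into a (source, target) text pair."""
    if task == "translate_en_de":
        return ("translate English to German: " + example["en"], example["de"])
    if task == "cola":
        # Classification: the label itself is emitted as a word.
        return ("cola sentence: " + example["sentence"],
                "acceptable" if example["label"] == 1 else "unacceptable")
    if task == "stsb":
        # Regression: the similarity score is rendered as a string,
        # rounded to the nearest 0.2 increment.
        return ("stsb sentence1: " + example["s1"] +
                " sentence2: " + example["s2"],
                str(round(example["score"] * 5) / 5))
    raise ValueError("unknown task: " + task)

src, tgt = to_text_to_text("translate_en_de",
                           {"en": "That is good.", "de": "Das ist gut."})
```

Because inputs and outputs are plain text across all tasks, a single model, loss function, and decoding procedure can be shared, which is what makes the systematic comparisons in the paper possible.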
AI Benchmark: All About Deep Learning on Smartphones in 2019
In this paper, researchers evaluate and compare the performance of all chipsets from Qualcomm, HiSilicon, Samsung, MediaTek and Unisoc that provide hardware acceleration for AI inference. They also discuss recent changes in the Android ML pipeline and provide an overview of deploying deep learning models on mobile devices.
(ETH Zurich & Google Research & Samsung & Huawei & Qualcomm & MediaTek & Unisoc)
Grounded Human-Object Interaction Hotspots From Video
Researchers propose an approach to learn human-object interaction “hotspots” directly from video. Rather than treat affordances as a manually supervised semantic segmentation task, the proposed approach learns about interactions by watching videos of real human behavior and anticipating afforded actions.
Teacher Algorithms for Curriculum Learning of Deep RL in Continuously Parameterized Environments
Researchers consider the problem of how a teacher algorithm can enable an unknown Deep Reinforcement Learning (DRL) student to master a skill across a wide range of diverse environments. To do so, they study how a teacher algorithm can learn to generate a learning curriculum, sequentially sampling the parameters that control a stochastic procedural generation of environments.
(Inria & Microsoft Research)
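The teacher/student loop above can be sketched in miniature. The paper studies teachers such as ALP-GMM operating over continuous parameter spaces; the simplified version below instead discretizes a single environment parameter into bins and uses an epsilon-greedy teacher that favors bins showing high absolute learning progress (the magnitude of the change in the student's reward). The class name, the toy student, and all constants are assumptions for illustration only.

```python
import random

class LearningProgressTeacher:
    """Toy curriculum teacher: prefer environment-parameter bins where the
    student's reward is changing the most (absolute learning progress)."""

    def __init__(self, n_bins, eps=0.2):
        self.n_bins = n_bins
        self.eps = eps
        self.last_reward = [0.0] * n_bins  # previous reward seen per bin
        self.progress = [0.0] * n_bins     # |reward delta| per bin

    def sample_bin(self):
        # Explore uniformly with probability eps (or before any signal exists),
        # otherwise exploit the bin with the highest learning progress.
        if random.random() < self.eps or max(self.progress) == 0.0:
            return random.randrange(self.n_bins)
        return max(range(self.n_bins), key=lambda b: self.progress[b])

    def update(self, b, reward):
        self.progress[b] = abs(reward - self.last_reward[b])
        self.last_reward[b] = reward

random.seed(0)
teacher = LearningProgressTeacher(n_bins=5)
student_skill = [0.0] * 5  # toy student: skill on a bin grows with practice

for _ in range(200):
    b = teacher.sample_bin()                              # teacher picks an environment
    student_skill[b] = min(1.0, student_skill[b] + 0.05)  # student trains there
    teacher.update(b, reward=student_skill[b])
```

Once a bin's reward plateaus, its learning progress falls to zero and the teacher's attention shifts elsewhere, which is the intuition behind progress-driven curricula: spend training time where the student is improving fastest.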
You May Also Like
Harvard & Google Seismic Paper Hit With Rebuttals: Is Deep Learning Suited to Aftershock Prediction?
The aftershocks that follow an earthquake can be even more dangerous and damaging than the main temblor, for example by collapsing already structurally weakened buildings. With deep learning emerging as something of a panacea in the world of science, AI researchers and seismologists alike are leveraging the tech in pursuit of better aftershock forecast solutions.
Alibaba Open-Sources Its MCU to Boost AI Research
Alibaba’s chip subsidiary Pingtouge (平头哥) has become the first Chinese company to open-source its microcontroller unit (MCU) design platform. Alibaba made the announcement at the Wuzhen Internet Conference on October 21.
Global AI Events
October 27-November 3: International Conference on Computer Vision (ICCV) in Seoul, South Korea
October 30-November 1: Conference on Robot Learning (CoRL) 2019 in Osaka, Japan
November 8: MLconf in San Francisco, United States
November 13-14: AI & Big Data Expo in Santa Clara, United States