AI Weekly

Google Reduces BERT Pre-Training Time; DeepMind AI Fails High School Math Test

Synced Global AI Weekly April 7th

Subscribe to Synced Global AI Weekly

New Google Brain Optimizer Reduces BERT Pre-Training Time From Days to Minutes
Google Brain researchers have proposed LAMB (Layer-wise Adaptive Moments optimizer for Batch training), a new optimizer that reduces pre-training time for Google's NLP model BERT (Bidirectional Encoder Representations from Transformers) from three days to just 76 minutes.
(Synced) / (Google & UC Berkeley & UCLA)
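The core idea in LAMB is a layer-wise "trust ratio" that scales an Adam-style update by the ratio of the weight norm to the update norm, which keeps very large batch sizes stable. Below is a minimal NumPy sketch of one such update for a single layer; the function name, hyperparameter defaults, and toy data are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def lamb_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
              eps=1e-6, weight_decay=0.01):
    """One illustrative LAMB-style update for a single layer's weights."""
    m = b1 * m + (1 - b1) * grad           # first moment (Adam-style)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
    w_norm = np.linalg.norm(w)
    u_norm = np.linalg.norm(update)
    # layer-wise trust ratio: scale the step by ||w|| / ||update||
    trust = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
    return w - lr * trust * update, m, v

# toy usage: one step on a random 4x4 layer
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
g = rng.normal(size=(4, 4))
m = np.zeros_like(w)
v = np.zeros_like(w)
w2, m, v = lamb_step(w, g, m, v, t=1)
```

Because the trust ratio is computed per layer, layers with large weights can take proportionally larger steps, which is what lets LAMB scale BERT pre-training to very large batches.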

DeepMind Analyzes Mathematical Reasoning Abilities of Neural Models
One might imagine that smart machines would have an easy time with mathematics — but as a domain, math remains relatively unexplored by AI. The new DeepMind paper Analyzing Mathematical Reasoning Abilities of Neural Models pits a neural network against a high school mathematics test with surprising results: the AI failed.
(Synced) / (DeepMind paper)

The Google MLIR Team Releases MLIR Core as Open Source: A New Multi-Level IR Compiler Framework
The MLIR project aims to define a common intermediate representation (IR) that will unify the infrastructure required to execute high performance machine learning models in TensorFlow and similar ML frameworks. This project will include the application of HPC techniques, along with integration of search algorithms like reinforcement learning.

Open-Sourcing Habana Back End for Glow
Facebook has open-sourced the first experimental back end for the Glow compiler and runtime project, designed to target Habana's existing hardware accelerators. It is the first in what is expected to be a series of Glow back ends customized for various vendors' accelerators.
(Facebook Code) 


Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks
In this paper, researchers present Randomized-to-Canonical Adaptation Networks (RCANs), a novel approach to crossing the visual reality gap that uses no real-world data. Their method learns to translate randomized rendered images into their equivalent non-randomized, canonical versions.
(Imperial College London & X & Google Brain & DeepMind & UC Berkeley)

Unsupervised Learning by Competing Hidden Units
In this paper researchers propose an unusual learning rule with a degree of biological plausibility, motivated by Hebb's idea that changes in synapse strength should be local, depending only on the activities of the pre- and postsynaptic neurons. They design a learning algorithm that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way.
(MIT & IBM Watson & Institute for Advanced Study & Princeton Neuroscience Institute)
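To make the idea concrete, here is a heavily simplified NumPy sketch of a local Hebbian rule in which winner-take-all competition stands in for global inhibition in the hidden layer. This is an illustrative approximation of the general approach, not the paper's exact learning rule; the function name, learning rate, and toy data are assumptions.

```python
import numpy as np

def hebbian_wta_epoch(W, X, lr=0.05):
    """One pass of a local Hebbian rule with winner-take-all
    competition (a crude stand-in for global inhibition)."""
    for x in X:
        h = W @ x                        # post-synaptic drive of each hidden unit
        k = np.argmax(h)                 # inhibition silences all but the winner
        # local update: depends only on pre-synaptic input x and the winning unit
        W[k] += lr * (x - W[k])
        W[k] /= np.linalg.norm(W[k])     # keep synapse strengths bounded
    return W

# toy usage: 8 hidden units learning features of random unit vectors
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)
W = rng.normal(size=(8, 16))
W /= np.linalg.norm(W, axis=1, keepdims=True)
W = hebbian_wta_epoch(W, X)
```

Note there is no loss function and no backpropagated error signal here: each weight change uses only information available at that synapse plus the competition outcome, which is what "biologically plausible" means in this context.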

Preferred Networks Creative Project: PFN Releases Crypko Technology, A Character Generation Platform
Preferred Networks, Inc. (PFN, Head Office: Tokyo, President & CEO: Toru Nishikawa) will start providing Crypko™️ technology, a character generation platform, as part of the PFN Creative Project.
(Preferred Networks) 

You May Also Like

Father of GANs Ian Goodfellow Splits Google For Apple
Ian Goodfellow — the research scientist who pioneered generative adversarial networks (GANs) — has left Google Brain and joined Apple to direct a special machine learning project, according to his LinkedIn profile, updated today, and a CNBC report.

Is the Fashion World Ready for AI-Designed Dresses?
Pinar Yanardag and Emily Salvador recently launched their fashion brand, aiming to reimagine dress design with the help of an artificial intelligence technique called Generative Adversarial Networks (GANs).

Global AI Events

April 9-11, Google Cloud Next in San Francisco, United States

April 11, Applied Machine Learning Conference in Charlottesville, United States

April 15-17, Applied AI Software Conference for Developers in San Francisco, United States

April 15-18, Artificial Intelligence Conference in New York, United States

Global AI Opportunities
