Since Google Research introduced BERT (Bidirectional Encoder Representations from Transformers) in 2018, the model has gained unprecedented popularity among researchers. Now, a group of researchers from National Cheng Kung University in Tainan, Taiwan, is challenging BERT’s efficacy.
Might there be a more efficient approach to scaling up CNNs to improve accuracy? Researchers from Google AI say “yes” and have proposed a new model scaling method in their ICML 2019 paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.
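The core of the EfficientNet proposal is compound scaling: rather than enlarging depth, width, or input resolution independently, all three are scaled together by a single coefficient φ using base multipliers α, β, γ found by grid search (the paper reports α = 1.2, β = 1.1, γ = 1.15, constrained so α·β²·γ² ≈ 2). A minimal sketch of the idea in Python; the `compound_scale` helper is illustrative, not the authors' code:

```python
# Base multipliers for depth, width, and resolution from the EfficientNet
# paper, found via grid search under the constraint alpha * beta^2 * gamma^2 ~ 2.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: int) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# Each increment of phi multiplies FLOPS by roughly alpha * beta^2 * gamma^2,
# i.e. close to 2x, since FLOPS grow linearly in depth and quadratically in
# width and resolution.
flops_factor = ALPHA * BETA ** 2 * GAMMA ** 2
print(round(flops_factor, 2))  # → 1.92, i.e. roughly doubling per step
```

The practical upshot is a single knob: picking a larger φ trades compute for accuracy in a balanced way, instead of hand-tuning three separate dimensions.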
Traditional methods for estimating 3D structure and camera motion in videos rely heavily on hand-crafted assumptions such as continuity and planarity. Google researchers have now presented an alternative deep learning method that learns these priors directly from unlabelled video.
To make ML-based solutions available for a wider variety of deployment scenarios, Waymo’s autonomous driving team has collaborated with Google AI Brain Team researchers on a system that automates the creation of high-quality, low-latency neural networks built on existing AutoML architectures.
Google AI lead Jeff Dean recently posted a link to his 1990 senior thesis on Twitter, which set off a wave of nostalgia for the early days of machine learning in the AI community. Parallel Implementation of Neural Network Training: Two Back-Propagation Approaches may be almost 30 years old and only eight pages long, but the paper does a remarkable job of explaining the methods behind neural network training that underpin the modern development of artificial intelligence.
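The two approaches in the thesis title correspond to what practitioners now call data parallelism (partitioning training patterns across processors) and model parallelism (partitioning the network's units). A plain-Python sketch of the pattern-partitioned idea for a toy 1-D linear model; the shard layout and learning rate here are illustrative assumptions, not the thesis code:

```python
# Data-parallel gradient accumulation: each "processor" computes the
# gradient on its own shard of training examples, and the per-shard
# gradients are summed before a single weight update.

def gradient(w: float, shard: list[tuple[float, float]]) -> float:
    """Summed gradient of squared error 0.5*(w*x - y)^2 over a shard."""
    return sum((w * x - y) * x for x, y in shard)

def parallel_step(w: float, shards: list[list[tuple[float, float]]],
                  lr: float) -> float:
    # In a real parallel run each shard's gradient is computed on a
    # separate processor and then reduced; here we just loop over shards.
    total_grad = sum(gradient(w, s) for s in shards)
    return w - lr * total_grad

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # samples of y = 2x
shards = [data[:2], data[2:]]  # two simulated processors
w = 0.0
for _ in range(50):
    w = parallel_step(w, shards, lr=0.01)
print(round(w, 3))  # → 2.0, the true slope
```

Because gradients are additive over examples, splitting the batch across processors and summing the results yields exactly the same update as computing it on one machine, which is why the pattern-partitioned approach scales so naturally.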
Over the past three months, criticism and protests have been mounting over Google’s participation in Project Maven, a Pentagon pilot program to build machine learning models to detect and categorize objects in drone footage provided by the US Department of Defense.