In the new paper AutoDIME: Automatic Design of Interesting Multi-Agent Environments, an OpenAI research team explores automatic environment design for multi-agent environments using an RL-trained teacher that samples environments to maximize student learning. The work demonstrates that intrinsic teacher rewards are a promising approach for automating both single- and multi-agent environment design.
A research team from UC Berkeley, Amazon Web Services, Google, Shanghai Jiao Tong University and Duke University proposes Alpa, a compiler system for distributed deep learning on GPU clusters that automatically generates parallelization plans that match or outperform hand-tuned model-parallel training systems even on the models they were designed for.
University of Illinois Urbana-Champaign and Google researchers introduce AutoDistill, an end-to-end fully automated model distillation framework that integrates model architecture exploration and multi-objective optimization for building hardware-efficient pretrained natural language processing models.
To help users design and tune machine learning models, neural network architectures, or complex system parameters efficiently and automatically, Microsoft Research began developing its Neural Network Intelligence (NNI) AutoML toolkit in 2017, open-sourcing version 1.0 in 2018.
Might there be a more efficient approach to scaling up CNNs to improve accuracy? Researchers from Google AI say “yes” and have proposed a new model scaling method in their ICML 2019 paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.
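The paper's core idea, compound scaling, grows network depth, width, and input resolution together under a single coefficient rather than scaling one dimension in isolation. A minimal sketch of that idea follows, using the base coefficients reported in the EfficientNet paper (α = 1.2, β = 1.1, γ = 1.15, found by grid search subject to α·β²·γ² ≈ 2); the function name and structure here are illustrative, not the authors' code.

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Compute depth, width, and resolution multipliers for a given
    compound coefficient phi, per the EfficientNet compound scaling rule:
    depth scales by alpha**phi, width by beta**phi, resolution by gamma**phi.
    """
    depth_mult = alpha ** phi       # number of layers grows with alpha**phi
    width_mult = beta ** phi        # channels per layer grow with beta**phi
    resolution_mult = gamma ** phi  # input image size grows with gamma**phi
    return depth_mult, width_mult, resolution_mult

# phi = 0 recovers the baseline network (all multipliers equal 1.0);
# larger phi values yield progressively larger EfficientNet variants.
d, w, r = compound_scale(1)
```

Doubling the available compute roughly corresponds to incrementing phi by one, since the constraint α·β²·γ² ≈ 2 ties FLOPs growth to 2^phi.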
Automated machine learning (AutoML) is a hot topic in artificial intelligence. Researchers from German digital and software company USU Software AG and the University of Stuttgart recently published a review paper summarizing the latest academic and industrial developments in AutoML.
Designing accurate and efficient CNNs for mobile devices is challenging due to the large design space and the high computational cost of exploring it. Although many mobile CNNs are available for developers to train and deploy, existing architectures may not achieve the best results for every task on mobile devices.
The Synced Lunar New Year Project is a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this second installment (click here to read the previous article on Clarifai CEO Matt Zeiler), Synced speaks with Google Brain researcher Quoc Le on his latest invention, AutoML, Google Brain’s pursuit of AI, and the secret of transforming lab technologies into real-world practice.
To make ML-based solutions available for a wider variety of deployment scenarios, Waymo’s autonomous driving team has collaborated with Google AI Brain Team researchers on a system that automates the creation of high-quality, low-latency neural networks based on existing AutoML architectures.
At the annual Google Cloud Next conference, which kicked off July 24 in San Francisco, the company unveiled a series of AI-based product releases and enhancements for its analytics and machine learning tools, additional applications on G Suite, and new IoT products.