Current state-of-the-art convolutional architectures for object detection are human-designed. In a recent paper, Google Brain researchers leveraged the advantages of Neural Architecture Search (NAS) to propose NAS-FPN, a new feature pyramid architecture discovered through automated search.
TensorFlow is the world’s most popular open source machine learning library. Since its initial release in 2015, the Google Brain project has been downloaded over 41 million times. At this week’s 2019 TensorFlow Dev Summit, Google announced a major upgrade to the framework, the TensorFlow 2.0 Alpha version.
Machine learning models based on deep neural networks have achieved unprecedented performance on many tasks. These models are generally considered to be complex systems that are difficult to analyze theoretically. Moreover, because optimization is governed by a high-dimensional, non-convex loss surface, it is very challenging to describe the gradient-based dynamics of these models during training.
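The difficulty can be seen even in one dimension: on a non-convex loss, gradient descent can converge to different minima depending only on initialization, so the training dynamics have no single closed-form description. The sketch below is a toy illustration of my own (the loss function and step size are not from the article), using the double-well loss f(x) = (x² − 1)², which has two minima at x = ±1.

```python
def gradient_descent(x, lr=0.1, steps=200):
    """Plain gradient descent on the non-convex loss f(x) = (x^2 - 1)^2,
    whose gradient is f'(x) = 4x(x^2 - 1). Which of the two minima
    (x = -1 or x = +1) we reach depends entirely on the starting point."""
    for _ in range(steps):
        x -= lr * 4 * x * (x * x - 1)
    return x
```

Starting from x = 0.5 the iterates settle near +1, while starting from x = −0.5 they settle near −1; in millions of dimensions, characterizing this initialization dependence analytically is what makes the theory hard.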
The Synced Lunar New Year Project is a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this second installment (click here to read the previous article on Clarifai CEO Matt Zeiler), Synced speaks with Google Brain Researcher Quoc Le about his latest invention, AutoML, Google Brain’s pursuit of AI, and the secret of turning lab technologies into real-world applications.
DARCCC (Detecting Adversaries by Reconstruction from Class Conditional Capsules) is a technique that uses a similarity metric to compare a reconstructed image with the original input image, determining whether the input is adversarial and thus whether the system is under attack.
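The core detection idea can be sketched in a few lines: if the reconstruction produced from the predicted class is far from the input, the input is flagged as adversarial. This is a minimal sketch of that thresholding step only, assuming an L2 similarity metric and a threshold tuned on clean validation data; the function names and the reconstruction itself (which in DARCCC comes from a capsule network) are placeholders, not the paper’s implementation.

```python
import numpy as np

def reconstruction_distance(x, x_recon):
    """L2 distance between an input image and its class-conditional
    reconstruction, with both arrays flattened to vectors."""
    return float(np.linalg.norm(np.ravel(x) - np.ravel(x_recon)))

def is_adversarial(x, x_recon, threshold):
    """Flag the input as adversarial when the reconstruction error
    exceeds a threshold chosen on clean validation data (assumed here)."""
    return reconstruction_distance(x, x_recon) > threshold
```

On clean inputs the class-conditional reconstruction stays close to the input, so the distance falls below the threshold; adversarial perturbations push the reconstruction away from the input and trip the detector.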
A founding member of Google Brain and the mind behind AutoML, Quoc Le is an AI natural: he loves machine learning and loves automating things. As a PhD student at Stanford University in 2011, Le used millions of YouTube thumbnails to develop an unsupervised learning system that recognized cats.
Google is looking to expand its AI research activities in the Japanese capital. The company’s deep learning and AI research team Google Brain yesterday posted a Tokyo job listing seeking “talented experts to participate in cutting edge research on machine learning”.
Google has announced the release of MusicVAE, a machine learning model that makes composing musical scores as easy as mixing paint on a palette. A breakthrough from Google Brain’s Magenta Project, MusicVAE generates and morphs melodies to output multi-instrumental passages optimized for expression, realism, and smoothness that sound convincingly like human-composed music.