EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
Google Brain researchers propose a model scaling method that uses a compound coefficient to uniformly scale up network depth, width, and input resolution in a more principled manner. Powered by this novel scaling method and recent progress on AutoML, they have developed a family of models, called EfficientNets, which surpass state-of-the-art accuracy with up to 10x better efficiency.
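The compound rule can be sketched in a few lines. The defaults below (alpha=1.2, beta=1.1, gamma=1.15) are the values the paper reports for the EfficientNet-B0 baseline; treat this as a minimal illustration, not the full implementation:

```python
# Sketch of compound scaling: for compound coefficient phi, network depth,
# width and input resolution grow as alpha**phi, beta**phi, gamma**phi,
# under the constraint alpha * beta**2 * gamma**2 ≈ 2 so that total FLOPs
# roughly double with each unit increase in phi.
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return alpha ** phi, beta ** phi, gamma ** phi
```

Scaling all three dimensions together, rather than depth or width alone, is the key idea behind the accuracy/efficiency gains.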
Geoffrey Hinton Leads Google Brain Representation Similarity Index Research Aiming to Understand Neural Networks
A Google Brain research team led by Turing Award recipient Geoffrey Hinton recently published a paper that presents an effective method for measuring the similarity of representations learned by deep neural networks.
(Synced) / (Google Brain)
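The similarity index the paper advocates is centered kernel alignment (CKA). A minimal NumPy sketch of its linear form, assuming activation matrices with one row per example:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices (rows = examples).
    Returns 1.0 for identical or isotropically scaled representations."""
    X = X - X.mean(axis=0)  # centre each feature column
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

Because the score is invariant to orthogonal transformations and isotropic scaling, it can compare layers of different widths, which is what makes it useful for analysing learned representations.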
Cold Case: The Lost MNIST Digits
Researchers propose a reconstruction of MNIST to serve as a replacement for the dataset, with negligible changes in accuracy. They trace each MNIST digit back to its NIST source along with rich metadata such as writer identifier, partition identifier, etc. They also reconstruct the complete MNIST test set with 60,000 samples instead of the usual 10,000.
(New York University & Facebook AI)
DeepMind Proposes a Novel Way to Improve GANs Using Gradient Information
A group of researchers from DeepMind have introduced a novel framework that significantly improves signal recovery performance and speed. The approach involves jointly training a generator and the optimization process for reconstruction via meta-learning.
(Synced) / (DeepMind)
An Explicitly Relational Neural Network Architecture
To evaluate and analyse the architecture, the researchers introduce a family of simple visual relational reasoning tasks of varying complexity. They show that the proposed architecture learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks than a number of baseline architectures.
(DeepMind & Imperial College London)
Parallel Neural Text-to-Speech
Baidu researchers propose a non-autoregressive seq2seq model that converts text to spectrograms. The model is fully convolutional and achieves a roughly 17.5x speed-up over Deep Voice 3 at synthesis while maintaining comparable speech quality with a WaveNet vocoder.
Fair Is Better Than Sensational: Man Is to Doctor as Woman Is to Doctor
In this work, researchers show that embedding spaces have not been queried fairly: standard analogy tests forbid the answer from being any of the input words. Through a series of simple experiments, they highlight practical and theoretical problems in previous works, and demonstrate that some of the most widely cited biased analogies are in fact not supported by the data.
(University of Groningen)
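The mechanics of the exclusion can be illustrated with a toy nearest-neighbour query. The vectors below are invented for illustration only, not real word embeddings:

```python
import numpy as np

# Toy 3-d embeddings, invented for this sketch (dims roughly: person-ness,
# gender, medical). They are NOT real word2vec/GloVe vectors.
EMB = {
    "man":    np.array([1.0, -0.2, 0.0]),
    "woman":  np.array([1.0,  0.2, 0.0]),
    "doctor": np.array([0.5,  0.0, 1.0]),
    "nurse":  np.array([0.5,  0.0, 0.5]),
}

def analogy(a, b, a_star, exclude_inputs=True):
    """Solve a : b :: a_star : ? via nearest neighbour to b - a + a_star.
    exclude_inputs=True mirrors the standard evaluation practice the
    paper criticises: the three query words can never be returned."""
    target = EMB[b] - EMB[a] + EMB[a_star]
    words = [w for w in EMB if not (exclude_inputs and w in {a, b, a_star})]
    def cos(w):
        v = EMB[w]
        return v @ target / (np.linalg.norm(v) * np.linalg.norm(target))
    return max(words, key=cos)
```

With the inputs excluded, the query man : doctor :: woman : ? can only return "nurse" in this toy vocabulary; once the exclusion is lifted, "doctor" itself is the nearest neighbour, which is the paper's point: the constraint, not the data, can manufacture the biased analogy.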
You May Also Like
ML Community Raises Inclusivity Concerns After IEEE Bars Huawei Paper Reviewers
The Institute of Electrical and Electronics Engineers (IEEE), the world’s biggest technical professional organization, issued a statement on May 22 barring employees of Huawei and 68 of its affiliates from reviewing or accessing non-public papers submitted by others for publication.
Ask AI: Is Bob Dylan an Author or a Songwriter?
Google’s pretrained model BERT has become one of the hottest AI tools for Natural Language Processing. Although BERT has an unprecedented ability to capture rich semantic meanings from plain text, it’s not quite perfect. For example, if you ask the question “Is Bob Dylan a songwriter or a book author?” BERT’s pursuit of a response becomes as tangled as Dylan’s hair.
Global AI Events
June 4-7: Amazon re:MARS in Las Vegas, United States
June 10-12: World Conference on Robotics and AI (WCRAI) in Osaka, Japan
June 15-21: Computer Vision and Pattern Recognition in Long Beach, United States
June 20-21: AI for Good Summit in San Francisco, United States
Global AI Opportunities