To enable both content creators and end users to seriously restyle their apps’ interfaces while maintaining the content clarity essential to usability, researchers from Stanford have proposed ImagineNet, a novel tool for interface customisation.
To help users design and tune machine learning models, neural network architectures, or complex system parameters efficiently and automatically, Microsoft Research began developing its Neural Network Intelligence (NNI) AutoML toolkit in 2017, open-sourcing version 1.0 in 2018.
DeepMind trained and tested its neural model by first collecting a dataset of different types of mathematics problems. Rather than crowd-sourcing, the team synthesized the dataset, which allowed them to generate a larger number of training examples, control the difficulty level, and reduce training time.
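The synthetic-generation idea can be sketched as follows (a hypothetical illustration, not DeepMind's actual pipeline): a programmatic generator produces unlimited question/answer pairs, with a `difficulty` knob controlling operand size and expression length.

```python
import random

def make_arithmetic_problem(difficulty: int, rng: random.Random):
    """Generate one question/answer pair. Larger `difficulty`
    widens the operand range and lengthens the expression."""
    n_terms = 2 + difficulty           # more terms -> harder
    limit = 10 ** difficulty           # larger operands -> harder
    terms = [rng.randint(1, limit) for _ in range(n_terms)]
    ops = [rng.choice(["+", "-"]) for _ in range(n_terms - 1)]
    expr = str(terms[0])
    for op, t in zip(ops, terms[1:]):
        expr += f" {op} {t}"
    # The answer is computed, not labeled by humans.
    return f"What is {expr}?", str(eval(expr))

rng = random.Random(0)
question, answer = make_arithmetic_problem(difficulty=2, rng=rng)
```

Because every example is generated from a known template, the answer labels are exact and the difficulty distribution is fully under the experimenter's control.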
Andrew Brock, first author of the high-profile research paper Large Scale GAN Training for High Fidelity Natural Image Synthesis (aka “BigGAN”), has posted a GitHub repository of an unofficial PyTorch BigGAN implementation that requires only 4-8 GPUs to train the model.
In imperfect-information environments, the asynchronous neural fictitious self-play (ANFSP) method allows AI to learn optimal decisions across multiple parallel virtual environments. The approach has performed well in Texas Hold’em and multiplayer FPS video games.
Machine learning models based on deep neural networks have achieved unprecedented performance on many tasks. These models are, however, complex systems that are difficult to analyze theoretically. Moreover, because training optimizes a high-dimensional, non-convex loss surface, describing the gradient-based dynamics of these models during training is very challenging.
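To see why non-convexity complicates the analysis, consider this minimal sketch (illustrative only, not taken from any paper): plain gradient descent on a simple one-dimensional non-convex loss converges to different stationary points depending on initialization, so no single trajectory characterizes the dynamics.

```python
def loss(w):
    # A simple non-convex function with two local minima at w = -1 and w = +1.
    return (w**2 - 1.0)**2

def grad(w):
    # Analytic gradient of (w^2 - 1)^2.
    return 4.0 * w * (w**2 - 1.0)

def gradient_descent(w0, lr=0.05, steps=200):
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# Two initializations converge to different minima (w = -1 vs w = +1),
# so the training outcome depends on where optimization starts.
w_neg = gradient_descent(-0.5)
w_pos = gradient_descent(+0.5)
```

In deep networks the same phenomenon plays out over millions of dimensions, which is why theoretical descriptions of training dynamics remain elusive.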
In its new paper Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search, Xiaomi’s research team introduces a deep convolutional neural network (CNN) model designed using a neural architecture search (NAS) approach. Its performance is comparable to cutting-edge models such as CARN and CARN-M.
In his 1988 IEEE paper Cellular Neural Networks: Theory, UC Berkeley PhD student Lin Yang proposed Cellular Neural Network theory, a predecessor of the Convolutional Neural Networks (CNN) that would later revolutionize machine learning. Based on this theory, Yang blueprinted a 20×20 parallel simulated circuit chip in the university lab.
The internet loves those little looping action images we call GIFs. They can tell a short visual story in a small, highly portable file. However, the visual quality of GIFs is usually low compared to the videos they were sourced from. If you are sick of fuzzy, low-resolution GIFs, then researchers from Stony Brook University, UCLA, and Megvii Research have just the thing for you: “the first learning-based method for enhancing the visual quality of GIFs in the wild.”
The computational power of smartphones and tablets has skyrocketed to the point where they rival desktop computers sold only a few years ago. But although it’s easy for mobile devices to run all the standard smartphone apps, today’s artificial intelligence algorithms can be too compute-heavy for even high-end devices to handle.
Google AI lead Jeff Dean recently posted a link to his 1990 senior thesis on Twitter, which set off a wave of nostalgia for the early days of machine learning in the AI community. Parallel Implementation of Neural Network Training: Two Back-Propagation Approaches may be almost 30 years old and only eight pages long, but the paper does a remarkable job of explaining the methods behind neural network training that underpin the modern development of artificial intelligence.
Neural networks can be notoriously difficult to debug, but a Google Brain research team believes it may have come up with a novel solution. A paper by Augustus Odena and Ian Goodfellow introduces Coverage-Guided Fuzzing (CGF) methods for neural networks. The team also announced an open source software library for CGF, TensorFuzz.
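The core CGF loop can be sketched as follows (a simplified illustration of the idea, not the TensorFuzz API): maintain a corpus of inputs, randomly mutate them, and keep any mutant whose activation "coverage" signature has not been seen before, steering the fuzzer toward new behaviours of the network.

```python
import random

def coverage(model, x):
    """Return a coarse coverage signature for input x.
    Here: the sign pattern of the model's hidden activations."""
    hidden = model(x)
    return tuple(1 if h > 0 else 0 for h in hidden)

def fuzz(model, seeds, n_iters=1000, rng=None):
    """Coverage-guided fuzzing loop: keep mutants that trigger
    activation patterns not seen before."""
    rng = rng or random.Random(0)
    corpus = list(seeds)
    seen = {coverage(model, x) for x in corpus}
    for _ in range(n_iters):
        parent = rng.choice(corpus)
        # Mutate: small random perturbation of one coordinate.
        mutant = list(parent)
        i = rng.randrange(len(mutant))
        mutant[i] += rng.gauss(0.0, 0.1)
        sig = coverage(model, mutant)
        if sig not in seen:          # novel behaviour -> keep it
            seen.add(sig)
            corpus.append(mutant)
    return corpus

# Toy "model": two hand-written hidden units over a 2-D input
# (a stand-in for a real network's activations).
toy_model = lambda x: (x[0] - x[1], x[0] + x[1] - 1.0)
found = fuzz(toy_model, seeds=[[0.0, 0.0]])
```

In TensorFuzz the coverage signal comes from the network's actual activations (compared via approximate nearest neighbours), but the mutate-check-keep loop above captures the basic mechanism.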
The dearth of AI talent capable of manually designing neural architectures such as AlexNet and ResNet has spurred research into automatic architecture design. Google’s Cloud AutoML is an example of a system that enables developers with limited machine learning expertise to train high quality models. The trade-off, however, is AutoML’s high computational cost.
Last week, Singh posted a YouTube video, Making Amazon Alexa respond to Sign Language using AI, in which he smoothly communicates with Alexa using sign language. Alexa, running on a laptop with a built-in camera, interprets Singh’s signed queries in real time, converting them into text and delivering appropriate responses.