Insilico Medicine, a drug discovery startup based at Johns Hopkins University, has introduced MOSES (Molecular Sets), a platform for comparing the accuracy of models for molecular generation. MOSES provides a standardized benchmarking dataset, a set of open-sourced models with unified implementations, and evaluation metrics.
The digital painting tool GANpaint has gone viral on social media. The product of a team of high-profile researchers from MIT, IBM, Google, and the Chinese University of Hong Kong, GANpaint allows anyone, even those with little knowledge of digital painting or Photoshop, to “paint” incredibly complex and detailed photorealistic scenes.
In 2016 Google’s DeepMind stunned the world when its Go-playing system AlphaGo secured a historic victory over South Korean grandmaster Lee Sedol. Yesterday the UK’s top AI team delivered its latest “wow moment,” as its AI system AlphaFold topped the Critical Assessment of Structure Prediction (CASP) competition.
Another video game has succumbed to the strength of artificial intelligence. Uber researchers announced yesterday that their AI has completely solved Atari’s Montezuma’s Revenge, a classic game that involves moving a character from one room to another while killing enemies and collecting jewels in a 16th century Aztec-like pyramid.
DARCCC (Detecting Adversaries by Reconstruction from Class Conditional Capsules) is a technique that reconstructs an input image from a capsule network’s class-conditional representations and uses a similarity metric to compare the reconstruction with the original input; a large discrepancy flags the input as adversarial, signalling that the system is under attack.
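The detection step itself is simple: if the reconstruction strays too far from the input, raise an alarm. A minimal NumPy sketch of that thresholding logic (the threshold value and the L2 metric here are illustrative; the actual model’s reconstructions come from a trained capsule network):

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """L2 distance between an input image and its class-conditional reconstruction."""
    return float(np.linalg.norm(x - x_hat))

def is_adversarial(x, x_hat, threshold):
    """Flag the input as adversarial when the reconstruction deviates too much.

    In practice `threshold` would be calibrated on reconstruction errors
    measured over clean validation data.
    """
    return reconstruction_error(x, x_hat) > threshold

# Toy example: a clean input reconstructs closely; a perturbed one does not.
clean = np.zeros((4, 4))
good_recon = clean + 0.01   # error ~0.04
bad_recon = clean + 0.5     # error 2.0
print(is_adversarial(clean, good_recon, threshold=0.5))  # False
print(is_adversarial(clean, bad_recon, threshold=0.5))   # True
```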
Deep learning has become an essential toolbox used across a wide variety of applications, research labs, and industries. In this tutorial given at NIPS 2017, the speakers provide a set of guidelines to help newcomers to the field understand the most recent and advanced models and their application to diverse data modalities.
New research from Carnegie Mellon University, Peking University and the Massachusetts Institute of Technology shows that global minima of deep neural networks can be achieved via gradient descent under certain conditions. The paper Gradient Descent Finds Global Minima of Deep Neural Networks was published November 12 on arXiv.
Researchers from the University of California, Santa Barbara, and the University of Chicago have published a paper which identifies the risk of bad actors using smartphones’ WiFi signals to “see” through walls and surreptitiously track humans in their private rooms and offices.
If you’ve ever wondered whether Dota 2 or League of Legends is the more popular multiplayer online battle arena game, or how long you’d need to spend on a treadmill to burn off that party-size bag of chips you just ate, you know that you can probably find the answer by consulting a couple of relevant information sources and then applying what seems like a natural and straightforward reasoning process.
Facebook announced today that it is open-sourcing QNNPACK, a high-performance kernel library optimized for mobile AI. The computing power of mobile devices is but a tiny fraction of that of data center servers. As such it is essential to find ways to optimize mobile devices’ hardware performance in order to run today’s compute-hungry AI applications.
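QNNPACK itself is a C library of low-level convolution and matrix-multiply kernels, and the key to its mobile efficiency is that it operates on quantized 8-bit tensors rather than 32-bit floats. The affine uint8 quantization scheme it relies on can be sketched in a few lines (the `scale` and `zero_point` values below are illustrative):

```python
import numpy as np

def quantize(x, scale, zero_point):
    """Affine uint8 quantization: q = round(x / scale) + zero_point,
    clamped to [0, 255]. Quantized mobile kernels run convolutions and
    matrix multiplies directly on this 8-bit representation."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q, scale, zero_point):
    """Map uint8 codes back to approximate real values."""
    return (q.astype(np.int32) - zero_point) * scale

x = np.array([-1.0, 0.0, 1.0])
q = quantize(x, scale=0.1, zero_point=128)
print(q)                                          # [118 128 138]
print(dequantize(q, scale=0.1, zero_point=128))   # recovers [-1.  0.  1.]
```

Shrinking each value to one byte cuts memory traffic fourfold and lets kernels use fast integer SIMD instructions, which is where most of the mobile speedup comes from.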
Information retrieval (IR) is the activity of retrieving information from a collection of sources stored on computers, based on user queries. IR has a history spanning a century and lies at the heart of many ubiquitous applications such as web search, product recommendation, and personal feeds on social networks.
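The core loop of IR — index a collection, score each document against a query, return a ranking — fits in a short sketch. Here is a toy ranked-retrieval example using inverse document frequency weighting (the corpus and scoring are illustrative, not any particular system’s):

```python
import math
from collections import Counter

# Toy corpus standing in for "sources stored on computers".
docs = {
    "d1": "web search ranks pages by relevance to a query",
    "d2": "product recommendation suggests items a user may like",
    "d3": "social networks build personal feeds from user activity",
}

def tokenize(text):
    return text.lower().split()

# Document frequency: in how many documents each term appears.
n = len(docs)
df = Counter(t for text in docs.values() for t in set(tokenize(text)))

def weight(term):
    """Inverse document frequency: rarer terms carry more signal."""
    return math.log(n / df[term]) if term in df else 0.0

def score(query, text):
    """Sum of idf weights of query terms that occur in the document."""
    terms = set(tokenize(text))
    return sum(weight(t) for t in tokenize(query) if t in terms)

def search(query):
    """Return document ids ranked by descending relevance score."""
    return sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)

print(search("web search query"))  # 'd1' ranks first
```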
DeepMind announced today that it has opened its Graph Nets (GN) library to the public, enabling the use of graph networks in TensorFlow and Sonnet. Graph Nets is a machine learning framework that was published by DeepMind, Google Brain, MIT and the University of Edinburgh on June 15.
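The framework’s building block is the GN block: a function that updates edge features from their endpoint nodes, then updates node features by aggregating incoming edge messages. A stripped-down NumPy sketch of that edge-then-node pass (plain sums stand in for the library’s learned update functions; this is not the library’s actual API):

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers):
    """One simplified graph-network pass: update each edge from its endpoint
    node features, then update each node by summing its incoming edges.
    Learned functions (MLPs) are replaced by addition for illustration."""
    # Edge update: combine each edge feature with its sender and receiver nodes.
    new_edges = edges + nodes[senders] + nodes[receivers]
    # Node update: aggregate incoming edge messages per receiver.
    incoming = np.zeros_like(nodes)
    np.add.at(incoming, receivers, new_edges)
    new_nodes = nodes + incoming
    return new_nodes, new_edges

# A 3-node chain with edges 0->1 and 1->2, scalar features.
nodes = np.array([[1.0], [2.0], [3.0]])
edges = np.zeros((2, 1))
senders, receivers = np.array([0, 1]), np.array([1, 2])
new_nodes, new_edges = gn_block(nodes, edges, senders, receivers)
print(new_nodes.ravel())  # [1. 5. 8.]
```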
The DeepMimic paper’s first author, Berkeley PhD student Xue Bin Peng, has now open-sourced the project’s code, data, and frameworks. Moreover, Peng’s new research demonstrates that DeepMimic’s simulated characters can also learn to perform highly dynamic movements by using regular video clips of human examples as input data.
Founded in 1999, Tokyo-based DeNA has developed popular platforms and services for gaming, e-commerce, automotive, healthcare and entertainment content distribution. As AI continues transforming all things digital, DeNA is expanding its deep learning tech capabilities to support R&D on new techniques.
A Shanghai Jiao Tong University research team has announced the world’s first software for photonic analog quantum computing and simulation. “Feynman Photonic Analog Quantum Simulation” (FeynmanPAQS) is named after renowned quantum physicist Richard P. Feynman.
UC Berkeley researchers have published a paper demonstrating how Deep Reinforcement Learning can be used to control dexterous robot hands for complicated tasks. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations proposes a low-cost and high-efficiency control method that uses demonstration and simulation techniques to accelerate the learning process.
“Best GAN samples ever yet? Very impressive ICLR submission! BigGAN improves Inception Scores by >100.” The above Tweet is from renowned Google DeepMind research scientist Oriol Vinyals. It was retweeted last week by Google Brain researcher and “Father of Generative Adversarial Networks” Ian Goodfellow, and picked up momentum and praise from AI researchers on social media.
Georgia Tech and Google Brain researchers have introduced GAN Lab, a new interactive tool that visually presents the training process of Generative Adversarial Networks (GANs), a notoriously complex class of machine learning models. Even machine learning newbies can now experiment with GAN models using only a common web browser.
In a new paper, Durham University researchers introduce an anomaly detection model, GANomaly, comprising a conditional generative adversarial network that “jointly learns the generation of high-dimensional image space and the inference of latent space.” The process enables the model to perform anomaly detection tasks even in sample-poor environments.
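GANomaly’s anomaly criterion lives in that latent space: an input is encoded, decoded, and re-encoded, and since training only ever sees normal samples, normal inputs yield matching latent codes while anomalies do not. A minimal sketch of the scoring step, assuming the two latent codes have already been produced by the trained encoders (the toy vectors and threshold below are illustrative):

```python
import numpy as np

def anomaly_score(z, z_hat):
    """L1 distance between the latent code of the input (z) and the latent
    code re-encoded from its reconstruction (z_hat)."""
    return float(np.abs(z - z_hat).sum())

def is_anomalous(z, z_hat, threshold):
    """Threshold would be calibrated on scores of known-normal samples."""
    return anomaly_score(z, z_hat) > threshold

# Toy latent codes: a normal sample re-encodes close to its original code.
z = np.ones(8)
print(is_anomalous(z, z + 0.01, threshold=1.0))  # False: score ~0.08
print(is_anomalous(z, z + 0.50, threshold=1.0))  # True:  score 4.0
```

Comparing codes rather than pixels is the design choice that lets the model work in sample-poor settings: it never needs anomalous examples at training time, only a notion of what normal data re-encodes to.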
Chinese Internet mogul Jack Ma has a flair for naming new businesses: Alibaba originates from a character made famous in the One Thousand and One Nights collection of Arabian folk tales; while the company’s R&D arm Damo Academy derives from the name of a Chinese Buddhist monk instrumental in the creation of Shaolin Kung Fu.
Google AI lead Jeff Dean recently posted a link to his 1990 senior thesis on Twitter, which set off a wave of nostalgia for the early days of machine learning in the AI community. Parallel Implementation of Neural Network Training: Two Back-Propagation Approaches may be almost 30 years old and only eight pages long, but the paper does a remarkable job of explaining the methods behind neural network training and the modern development of artificial intelligence.
Google Brain Research Scientist Ian Goodfellow has tweeted an alarm about IoT hacking of a particularly nightmarish type, after Brown University security researchers were able to remotely access and control a robot in a university research lab. The research also showed that many robotic labs worldwide may be vulnerable to such a takeover technique.