Founded in 1999, Tokyo-based DeNA has developed popular platforms and services for gaming, e-commerce, automotive, healthcare and entertainment content distribution. As AI continues transforming all things digital, DeNA is expanding its deep learning capabilities to support R&D on new techniques.
A Shanghai Jiao Tong University research team has announced the world’s first software for photonic analog quantum computing and simulation. “Feynman Photonic Analog Quantum Simulation” (FeynmanPAQS) is named after renowned quantum physicist Richard P. Feynman.
UC Berkeley researchers have published a paper demonstrating how Deep Reinforcement Learning can be used to control dexterous robot hands for complicated tasks. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations proposes a low-cost and high-efficiency control method that uses demonstration and simulation techniques to accelerate the learning process.
“Best GAN samples ever yet? Very impressive ICLR submission! BigGAN improves Inception Scores by >100.” The above Tweet is from renowned Google DeepMind research scientist Oriol Vinyals. It was retweeted last week by Google Brain researcher and “Father of Generative Adversarial Networks” Ian Goodfellow, and picked up momentum and praise from AI researchers on social media.
Georgia Tech and Google Brain researchers have introduced the new interactive tool GAN Lab, which visually presents the training process of complex machine learning model Generative Adversarial Networks (GANs). Even machine learning newbs can now experiment with GAN models using only a common web browser.
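GAN Lab animates exactly this generator-versus-discriminator training loop. As a rough illustration of what the tool visualizes, here is a toy one-dimensional GAN in plain NumPy; the model, hyperparameters, and target distribution are all illustrative choices, not GAN Lab's own code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D GAN: generator g(z) = w_g*z + b_g tries to match N(4, 1.25);
# discriminator D(x) = sigmoid(w_d*x + b_d) tries to tell real from fake.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, batch = 0.03, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g

    # --- discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    # gradient of binary cross-entropy w.r.t. the logit is (D(x) - label)
    g_logit = np.concatenate([d_real - 1.0, d_fake - 0.0])
    x_all = np.concatenate([real, fake])
    w_d -= lr * np.mean(g_logit * x_all)
    b_d -= lr * np.mean(g_logit)

    # --- generator step: minimize -log D(fake) (non-saturating loss) ---
    d_fake = sigmoid(w_d * fake + b_d)
    g_sample = (d_fake - 1.0) * w_d      # dLoss/d(fake sample), chain-ruled
    w_g -= lr * np.mean(g_sample * z)
    b_g -= lr * np.mean(g_sample)

# with z drawn from N(0, 1), the generated mean equals b_g
print(f"generated mean after training: {b_g:.2f} (real mean 4.0)")
```

Watching the two players converge (or oscillate) on curves like these is precisely the intuition GAN Lab delivers in the browser.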
In a new paper Durham University researchers introduce an anomaly detection model, GANomaly, comprising a conditional generative adversarial network that “jointly learns the generation of high-dimensional image space and the inference of latent space.” The process enables the model to perform anomaly detection tasks even in sample-poor environments.
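The underlying intuition is that a model fitted only on normal data reproduces normal samples well and anomalous ones badly. GANomaly's actual score compares latent codes from its adversarially trained encoder-decoder-encoder; the stand-in below is a deliberately simplified linear sketch that scores samples by reconstruction error instead, purely to convey the idea, and none of it reflects the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
mix = rng.normal(size=(8, 32))
# "normal" samples all lie on an 8-D subspace of a 32-D space
normal_data = rng.normal(size=(500, 8)) @ mix

# stand-in "trained model": the top-8 principal directions of the normal data
_, _, vt = np.linalg.svd(normal_data - normal_data.mean(0),
                         full_matrices=False)
encode, decode = vt[:8].T, vt[:8]          # 32 -> 8 and 8 -> 32 linear maps

def anomaly_score(x):
    x_hat = (x @ encode) @ decode          # reconstruct from the latent code
    return np.linalg.norm(x - x_hat)       # normal samples reconstruct well
```

A fresh sample drawn from the normal subspace scores near zero, while an arbitrary 32-D vector scores high; GANomaly replaces these linear maps with trained deep networks and a latent-space distance, which is what makes it work on real images.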
Chinese Internet mogul Jack Ma has a flair for naming new businesses: Alibaba originates from a character made famous in the One Thousand and One Nights collection of Arabian folk tales; while the company’s R&D arm Damo Academy derives from the name of a Chinese Buddhist monk instrumental in the creation of Shaolin Kung Fu.
Google AI lead Jeff Dean recently posted a link to his 1990 senior thesis on Twitter, which set off a wave of nostalgia for the early days of machine learning in the AI community. Parallel Implementation of Neural Network Training: Two Back-Propagation Approaches may be almost 30 years old and only eight pages long, but the paper does a remarkable job of explaining the methods behind neural network training and the modern development of artificial intelligence.
Google Brain Research Scientist Ian Goodfellow has tweeted an alarm about IoT hacking of a particularly nightmarish type, after Brown University security researchers were able to remotely access and control a robot in a university research lab. The research also showed that many robotic labs worldwide may be vulnerable to such a takeover technique.
Electrifying an entire dance club is easy if you have killer moves like John Travolta in Saturday Night Fever. But for the rest of us, not so much. We may shake our butts and swing our arms, but let’s face it: some people just can’t dance. But now there’s hope, thanks to AI.
Nvidia and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have open-sourced their video-to-video synthesis model. By using a generative adversarial learning framework, the method can generate high-resolution, photorealistic and temporally coherent results with various input formats, including segmentation masks, sketches, and poses.
Google Brain Software Engineer Martin Wicke says a preview version of TensorFlow 2.0 will be released later this year. To cope with dramatic changes in both users and use-cases, TensorFlow 2.0 will shift its focus to “ease of use.” Wicke made the announcements yesterday in a Google Groups post.
Artificial intelligence can now match or outperform human experts in diagnosis and referral on eye diseases, suggests a new paper from DeepMind. The UK-based, Google-owned research institute today released joint research results with the UK’s Moorfields Eye Hospital and UCL Institute of Ophthalmology, which present a new AI technique in the context of OCT imaging. The paper was published on Nature Medicine’s website.
Nvidia’s paper Large Scale Language Modeling: Converging on 40GB of Text in Four Hours introduces a model that uses mixed precision arithmetic and a 32k batch size distributed across 128 Nvidia Tesla V100 GPUs to improve scalability and transfer in Recurrent Neural Networks (RNNs) for Natural Language tasks.
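The mixed-precision recipe can be sketched independently of the RNN itself: keep an FP32 "master" copy of the weights, run the forward and backward passes in FP16, and scale the loss so small gradients do not flush to zero in half precision. Below is a minimal NumPy simulation of that recipe on a toy linear model; the model, loss scale, and learning rate are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.2, 0.1], dtype=np.float16)
x = rng.normal(size=(32, 4)).astype(np.float16)
y = x @ w_true                               # FP16 targets

w_master = np.zeros(4, dtype=np.float32)     # FP32 master weights
loss_scale, lr = 512.0, 0.1

for _ in range(200):
    w16 = w_master.astype(np.float16)        # FP16 working copy of weights
    err = x @ w16 - y                        # FP16 forward pass
    # backward pass in FP16, gradient pre-multiplied by the loss scale
    # so small values stay representable in half precision
    grad16 = x.T @ (err * np.float16(loss_scale)) / np.float16(len(x))
    grad32 = grad16.astype(np.float32) / loss_scale  # unscale in FP32
    w_master -= lr * grad32                  # FP32 weight update
```

Keeping the update in FP32 matters because a step like `lr * grad` can be smaller than the gap between adjacent FP16 values, in which case an FP16-only update would silently do nothing.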
Benjamin Sanchez-Lengeling from Harvard University and Alán Aspuru-Guzik from the University of Toronto have successfully applied machine learning models to speed up the materials discovery process. Their paper Inverse molecular design using machine learning: Generative models for matter engineering was published July 27 in Science Vol. 361.
The Internet is woven into our everyday lives. We access massive amounts of data through our laptops, smartphones and tablets. This free flow of information, however, has prompted attempts to filter content that may not be appropriate for certain audiences, such as young people. One such new effort from Brazil puts virtual bikinis on nudes.
The Technical University of Munich (TUM) had lured Cheng from a Japanese research institute. At TUM he founded the Institute of Cognitive Systems (ICS). With eight employees in a central office on Karlstraße 45, Cheng set to work on his arduous task: recreating the complexities of human skin and wiring it all to a brain.
Neural networks can be notoriously difficult to debug, but a Google Brain research team believes it may have come up with a novel solution. A paper by Augustus Odena and Ian Goodfellow introduces Coverage-Guided Fuzzing (CGF) methods for neural networks. The team also announced an open source software library for CGF, TensorFuzz.
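Coverage-guided fuzzing carries over from traditional software testing: repeatedly mutate inputs drawn from a corpus, and keep any mutant that triggers behaviour the fuzzer has not seen before. Here is a minimal sketch of that loop for a tiny network, using the pattern of active ReLU units as a stand-in coverage signal; this illustrates the general idea only, not TensorFuzz's actual API or coverage metric.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 2))

def net(x):
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden layer
    return h, h @ W2

def coverage(h):
    # "coverage" = which hidden units fired; a hashable signature of behaviour
    return tuple(h > 0.0)

corpus = [np.zeros(4)]
seen = {coverage(net(corpus[0])[0])}
for _ in range(2000):
    parent = corpus[rng.integers(len(corpus))]
    mutant = parent + rng.normal(0.0, 0.3, 4)   # small random mutation
    h, _ = net(mutant)
    sig = coverage(h)
    if sig not in seen:              # new behaviour -> keep the input
        seen.add(sig)
        corpus.append(mutant)
```

In a real CGF setup each kept input would also be checked against a property of interest, for example whether any output is NaN, so the corpus steadily steers the search toward unexplored and potentially buggy regions of the network's behaviour.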
Since 2010, the annual ImageNet Large-Scale Visual Recognition Challenge has been the most widely recognized benchmark for testing image recognition algorithms. Tencent Machine Learning picks up the challenge with its new paper Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes.
The dearth of AI talent capable of manually designing neural architectures such as AlexNet and ResNet has spurred research in automatic architecture design. Google’s Cloud AutoML is an example of a system that enables developers with limited machine learning expertise to train high quality models. The trade-off, however, is AutoML’s high computational costs.
From Hayao Miyazaki’s Spirited Away to Satoshi Kon’s Paprika, Japanese anime has made it okay for adults everywhere to enjoy cartoons again. Now, a team of Tsinghua University and Cardiff University researchers has introduced CartoonGAN — an AI-powered technology that simulates the styles of Japanese anime maestri from snapshots of real world scenery.
At the recent IEEE International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, the Best Student Paper award went to ETH Zurich Autonomous Systems Laboratory (ASL)’s Miguel de la Iglesia Valls et al. for Design of an Autonomous Racecar: Perception, State Estimation and System Integration.