It is no secret that deep neural networks (DNNs) can achieve state-of-the-art performance on a wide range of complicated tasks. DNN models such as BigGAN, BERT, and GPT-2 have demonstrated the high potential of deep learning. Deploying DNNs on mobile devices, consumer devices, drones, and vehicles, however, remains a bottleneck for researchers.
GTC 2019 runs next Monday through Thursday (March 18–21), and while we can only speculate what surprises NVIDIA CEO Jensen Huang might have in store for us, we can get some sense of where the company is headed by looking at what it's been up to for the last 12 months.
Natural language processing has made significant progress in the past year, but few frameworks focus directly on NLP or sequence modeling. Google Brain recently released Lingvo, a deep learning framework based on TensorFlow. Synced invited Ni Lao, Chief Science Officer at Mosaix, to share his thoughts on Lingvo.
A paper recently accepted for ICLR 2019 challenges the usual tradeoff between adaptive methods and SGD with a novel optimizer, AdaBound, which its authors say can train machine learning models "as fast as Adam and as good as SGD." Essentially, AdaBound is an Adam variant that employs dynamic bounds on learning rates to achieve a gradual and smooth transition to SGD.
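The bounding idea can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation: it omits Adam's bias correction, and `adabound_step` and its default hyperparameters are our own choices for the sketch.

```python
import numpy as np

def adabound_step(param, grad, m, v, t, base_lr=0.001, final_lr=0.1,
                  beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    """One AdaBound-style update (simplified sketch, no bias correction).

    Adam's moment estimates are kept, but the per-parameter step size is
    clipped between dynamic bounds that both converge to final_lr, so the
    update gradually morphs from Adam-like into SGD-like.
    """
    m = beta1 * m + (1 - beta1) * grad        # first moment (Adam)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (Adam)
    # Dynamic bounds: start at (0, inf) and tighten toward final_lr as t grows.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(base_lr / (np.sqrt(v) + eps), lower, upper)
    param = param - step * m
    return param, m, v
```

Early in training the bounds are loose, so the Adam-style adaptive step passes through unchanged; as `t` grows, both bounds squeeze toward `final_lr`, leaving a plain SGD-with-momentum update.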
Synced is proud to present Gary Marcus as the last installment in our Lunar New Year Project — a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. (Read the previous articles on Clarifai CEO Matt Zeiler and Google Brain Researcher Quoc Le.)
Uber has unveiled Ludwig, a new TensorFlow-based toolkit that enables users to train and test deep learning models without writing any code. The toolkit will help non-experts understand models and accelerate their iterative development by simplifying the prototyping process and data processing.
Reinforcement learning (RL) has achieved spectacular results, e.g., in Atari games, AlphaGo, AlphaGo Zero, AlphaZero, DeepStack, Libratus, OpenAI Five, Dactyl, DeepMimic, Capture The Flag, learning to dress, data center cooling, chemical syntheses, drug design, etc. See more RL applications.
In 2016 Google's DeepMind stunned the world when its Go-playing program AlphaGo secured a historic victory over Korean grandmaster Lee Sedol. Yesterday the UK's top AI team delivered its latest "wow moment" as its AI system AlphaFold topped the Critical Assessment of protein Structure Prediction (CASP) competition.
Japanese global trading giant Mitsui & Co. and leading deep learning startup Preferred Networks (PFN) have announced a joint venture in the US to provide Biomedical/Healthcare Solutions, including Cancer Diagnostic Services, based on deep learning technology.
DARCCC (Detecting Adversaries by Reconstruction from Class Conditional Capsules) is a technique that uses a similarity metric to compare a reconstructed image with the original input image, flagging inputs whose reconstructions diverge enough to suggest the system is under adversarial attack.
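The detection step can be sketched as follows. This is a minimal illustration of the reconstruction-distance idea only: `detect_adversarial` is a hypothetical helper, `reconstruct` stands in for the capsule network's class-conditional decoder, and in practice the threshold would be tuned on clean validation data.

```python
import numpy as np

def detect_adversarial(x, reconstruct, threshold):
    """Flag input x as adversarial if its reconstruction is too dissimilar.

    `reconstruct` is a stand-in for the class-conditional capsule decoder;
    the similarity metric here is mean squared reconstruction error.
    """
    x_hat = reconstruct(x)
    dist = np.mean((x - x_hat) ** 2)
    # Clean inputs reconstruct well; adversarial inputs tend not to.
    return dist > threshold
```

The intuition is that a decoder trained on clean data reconstructs legitimate inputs faithfully, while an adversarially perturbed input is decoded toward the (wrong) predicted class and therefore drifts away from the original image.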
Deep learning has become an essential toolbox used across a wide variety of applications, research labs, and industries. In this tutorial given at NIPS 2017, the speakers provide a set of guidelines to help newcomers to the field understand the most recent and advanced models and their application to diverse data modalities.
One of the top minds in machine learning, Andrew Ng is having an increasingly profound impact on AI education. Ng’s machine learning course at Stanford University remains the most popular on Coursera, the world-leading online education platform he co-founded in 2012.
DeepMind announced today that it has opened its Graph Nets (GN) library to the public, enabling the use of graph networks in TensorFlow and Sonnet. Graph Nets is a machine learning framework that was published by DeepMind, Google Brain, MIT, and the University of Edinburgh on June 15.
Founded in 1999, Tokyo-based DeNA has developed popular platforms and services for gaming, E-commerce, automotive, healthcare and entertainment content distribution. As AI continues transforming all things digital, DeNA is expanding its deep learning tech capabilities to support R&D on new techniques.
Last month's ReWork Deep Learning Summit in London provided a peek at recent research progress and future trends in artificial intelligence technologies. The two-day event featured top scientists and engineers from Facebook, MIT Media Lab, DeepMind, and other leading institutes.
The computational power of smartphones and tablets has skyrocketed to the point where they rival desktop computers from just a few years ago. Although it's easy for mobile devices to run all the standard smartphone apps, today's artificial intelligence algorithms can be too compute-heavy for even high-end devices to handle.
UC Berkeley researchers have published a paper demonstrating how deep reinforcement learning can be used to control dexterous robot hands for complicated tasks. The paper, "Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations," proposes a low-cost, high-efficiency control method that uses demonstrations and simulation to accelerate the learning process.
Nadja Rhodes is enamoured with artificial intelligence. A Seattle-based Microsoft software developer unpracticed in AI techniques such as deep learning, Rhodes had applied to a number of tech-company-sponsored AI residency initiatives, but to no avail. And so she was thrilled to be accepted by OpenAI Scholars.