Synced spoke with AI pioneer Professor Yoshua Bengio at the Computing in the 21st Century Conference in Beijing, where he discussed his recent research and the current state of AI.
Microsoft researchers have released technical details of an AI system that combines multi-task learning with language model pre-training. The new Multi-Task Deep Neural Network (MT-DNN) is a natural language processing (NLP) model that outperforms Google's BERT on nine of eleven benchmark NLP tasks.
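The core pattern is broadly applicable: a shared encoder feeds several task-specific heads, so every task's training signal improves the shared representation. Below is a minimal pure-Python sketch of that shared-encoder idea; the scalar "encoder" and all function names are illustrative inventions, not Microsoft's implementation.

```python
# Toy sketch of the multi-task idea behind models like MT-DNN: one
# shared encoder (here a single weight) feeds per-task heads, so the
# gradient from every task flows back into the shared part.
# Illustrative only -- not the actual MT-DNN architecture.

def encode(x, shared_w):
    """Shared 'encoder': a single scalar weight."""
    return shared_w * x

def head(h, task_w):
    """Task-specific 'head': another scalar weight."""
    return task_w * h

def train_step(x, targets, shared_w, task_ws, lr=0.01):
    """One SGD step on the summed squared error across all task heads."""
    h = encode(x, shared_w)
    grad_shared = 0.0
    new_task_ws = []
    for tw, y in zip(task_ws, targets):
        err = head(h, tw) - y
        grad_shared += 2 * err * tw * x      # every task updates the encoder
        new_task_ws.append(tw - lr * 2 * err * h)
    return shared_w - lr * grad_shared, new_task_ws
```

Running `train_step` repeatedly drives the summed loss down, with each task's error shaping the shared weight.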
Synced is proud to present Gary Marcus as the last installment in our Lunar New Year Project — a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. (Read the previous articles on Clarifai CEO Matt Zeiler and Google Brain Researcher Quoc Le.)
In 2017 Google introduced Federated Learning (FL), “a specific category of distributed machine learning approaches which trains machine learning models using decentralized data residing on end devices such as mobile phones.” A new Google paper proposes a scalable production system for federated learning, designed to handle growing workloads through the addition of resources such as compute, storage, and bandwidth.
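The basic mechanic behind such systems is federated averaging: each device trains locally on its own private data, and the server only ever sees and averages the returned model weights. A toy sketch, assuming a scalar least-squares model; the function names are invented for illustration and this is not Google's production API:

```python
# Minimal sketch of federated averaging: devices compute local updates
# on private data; the server averages weights, never the raw data.

def local_update(w, device_data, lr=0.1):
    """One gradient step on a device's private (x, y) samples
    for the scalar least-squares model y ~ w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in device_data) / len(device_data)
    return w - lr * grad

def federated_round(global_w, all_device_data):
    """Server broadcasts w; each device trains locally; the server
    averages the returned weights."""
    local_ws = [local_update(global_w, d) for d in all_device_data]
    return sum(local_ws) / len(local_ws)

def train(all_device_data, rounds=100, w=0.0):
    for _ in range(rounds):
        w = federated_round(w, all_device_data)
    return w
```

With consistent data across devices (here, points on the line y = 2x), the averaged weight converges to the same solution centralized training would find, without any device sharing its samples.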
The San Francisco-based AI non-profit has, however, raised eyebrows in the research community with its unusual decision not to release the language model’s code and training dataset. In a statement sent to Synced, OpenAI explained that the choice was made to prevent malicious use: “it’s clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse.”
Uber has unveiled Ludwig, a new TensorFlow-based toolkit that enables users to train and test deep learning models without writing any code. By simplifying prototyping and data preprocessing, the toolkit aims to help non-experts understand models and to accelerate iterative model development.
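As an illustration of the no-code workflow, Ludwig models are described declaratively in a YAML file. The example below is hypothetical: the dataset and feature names are made up, and the schema follows the launch-era format, which later Ludwig versions have revised, so treat it as a sketch rather than a drop-in file.

```yaml
# Hypothetical model_definition.yaml for a text classifier in Ludwig.
# Feature and dataset names are invented; check the docs for your
# version, as the schema has evolved since the initial release.
input_features:
  - name: news_text
    type: text
    encoder: parallel_cnn
output_features:
  - name: category
    type: category
```

Training would then be a single command, e.g. `ludwig experiment --data_csv news.csv --model_definition_file model_definition.yaml` (flag names from the 2019 release; later versions renamed them).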
Debating is a hallmark of human civilization, and few do it better than World Debating Championship Finalist Harish Natarajan. Last night at Yerba Buena Center for the Arts in San Francisco, Natarajan stepped up against an AI-empowered debating machine from IBM.
In December Synced reported on a hyperrealistic face generator developed by US chip giant NVIDIA. The GAN-based model performs so well that most people can’t distinguish the faces it generates from real photos. This week NVIDIA announced that it is open-sourcing the nifty tool, which it has dubbed “StyleGAN”.
The Synced Lunar New Year Project is a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this second installment (click here to read the previous article on Clarifai CEO Matt Zeiler), Synced speaks with Google Brain Researcher Quoc Le on his latest invention, AutoML, Google Brain’s pursuit of AI, and the secret of transforming lab technologies into real practices.
Uber AI Lab has created a buzz in the machine learning community with the publication of a paper introducing a new reinforcement learning algorithm called Go-Explore. The algorithm is designed to overcome the challenge of exploration in reinforcement learning, improving performance on hard-exploration tasks.
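Go-Explore's core loop is "first return, then explore": remember every state ("cell") reached along with a trajectory that got there, deterministically replay a trajectory to return to a promising cell, then explore randomly from it. A toy sketch on a one-dimensional chain world follows; the names are invented and it makes no claim to match Uber's implementation.

```python
import random

# Toy sketch of Go-Explore's "first return, then explore" loop on a 1-D
# chain: state 0 is the start, and the agent must reach state GOAL.
# The archive maps each visited state ("cell") to the shortest action
# sequence known to reach it. Each iteration picks an archived cell,
# deterministically replays its trajectory to return there, then takes
# a few random exploratory steps, archiving any new or shorter routes.

GOAL = 10

def step(state, action):
    """Deterministic environment: 'R' moves right, 'L' moves left."""
    return max(0, state + (1 if action == "R" else -1))

def go_explore(iterations=5000, explore_steps=3, seed=0):
    rng = random.Random(seed)
    archive = {0: []}  # cell -> action trajectory that reaches it
    for _ in range(iterations):
        cell = rng.choice(list(archive))
        state, traj = 0, list(archive[cell])
        for a in traj:                      # "go": return to the cell
            state = step(state, a)
        for _ in range(explore_steps):      # "explore" from the cell
            a = rng.choice("RL")
            state = step(state, a)
            traj.append(a)
            if state not in archive or len(traj) < len(archive[state]):
                archive[state] = list(traj)
        if GOAL in archive:
            return archive[GOAL]            # trajectory solving the task
    return None
```

In the real algorithm the archive keys are downscaled game frames and the "return" phase exploits a resettable simulator; the toy above keeps only the bookkeeping.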
Last December some 9,000 attendees packed a single venue in Montreal for a week-long academic conference. NeurIPS was completely sold out, the latest indication of just how hot AI is nowadays. As AI and machine learning continue to ignite discussion across a wide variety of disciplines, novel approaches to the tech are also garnering interest.
This is the first installment of the Synced Lunar New Year Project, a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this article, Synced chats with Clarifai Founder and CEO Matt Zeiler on recent progress in computer vision and his company’s plans for the future. Founded in New York in 2013, Clarifai produces advanced image recognition systems.
Papers With Code is a unique and useful resource that presents trending ML research along with the code to implement it. The site was created by Atlas ML CEO Robert Stojnic, aka “rstoj” on Reddit’s machine learning board. The latest version of Papers With Code has added 950+ unique machine learning tasks, 500+ state-of-the-art result leaderboards and 8,500+ papers with code.
There’s a lot more to a friendly game of Jenga than meets the eye. Strategies are informed by a complex set of tactile and visual stimuli — by touching a block and observing the tower, we not only see but also feel our actions and their consequences. The MIT Jenga robot thus marks an important step in AI’s transition to the physical world.
In the 1988 IEEE paper Cellular Neural Networks: Theory, co-authored with Leon O. Chua, UC Berkeley PhD student Lin Yang proposed Cellular Neural Network theory, a predecessor of the Convolutional Neural Networks (CNNs) that would later revolutionize machine learning. Based on this theory, Yang blueprinted a 20×20 parallel simulated circuit chip in the university lab.
The International Gymnastics Federation (FIG) recently approved the use of a “Judging Support System” developed by Fujitsu for a series of FIG gymnastics events in 2019. The system will be tested at the 2019 FIG World Cup Series, then officially launched for the 49th Artistic Gymnastics World Championships in Stuttgart, Germany this coming October.
DeepMind bot AlphaStar has scored a convincing 10-0 victory against pro human players in a special series of StarCraft II matches. Plucky 26-year-old Polish gamer Grzegorz “MaNa” Komincz, however, salvaged a bit of human pride, snatching a surprise win yesterday in a live rematch at the DeepMind and Blizzard Entertainment StarCraft II Demonstration live-stream event hosted in London.
ANYmal does not have an easy life. One of the four-legged robot’s main tasks is to learn how to stand up again — no matter how many times it is kicked, pushed or otherwise tumbles to the ground. A research team from Switzerland’s ETH Zurich trained ANYmal using reinforcement learning (RL) and published their work last Wednesday.
The internet loves those little looping action images we call GIFs. They can tell a short visual story in a small, highly portable file. The visual quality of GIFs is, however, usually low compared to the videos they were sourced from. If you are sick of fuzzy, low-resolution GIFs, then researchers from Stony Brook University, UCLA, and Megvii Research have just the thing for you: “the first learning-based method for enhancing the visual quality of GIFs in the wild.”