According to a CIRP study, Amazon Alexa holds a commanding 70 percent share of the US market, with rival Google Assistant taking 24 percent. A Market Research Future study predicts the voice assistant market will reach US$7.8 billion by 2023, growing at a compound annual growth rate of almost 40 percent.
Facebook AI Chief Yann LeCun introduced his now-famous “cake analogy” at NIPS 2016: “If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning (RL).”
Last Monday US President Donald Trump signed the “American AI Initiative,” an executive order designed to spur US investment in artificial intelligence and boost the domestic AI industry. The initiative has five highlights: Investing in AI Research and Development (R&D); Unleashing AI Resources; Setting AI Governance Standards; Building the AI Workforce; and International Engagement and Protecting our AI Advantage.
In its new paper Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search, Xiaomi’s research team introduces a deep convolutional neural network (CNN) model discovered via a neural architecture search (NAS) approach. Its performance is comparable to that of cutting-edge models such as CARN and CARN-M.
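The general NAS recipe is to sample candidate architectures, score them, and keep the best under a resource constraint. Below is a minimal random-search sketch of that idea, not the elastic search used in the Xiaomi paper; the width choices, parameter budget, and `proxy_score` quality measure are all hypothetical stand-ins for real training and validation:

```python
import random

def sample_architecture(rng, n_blocks=4, widths=(16, 32, 48, 64)):
    """Encode an architecture as a tuple of per-block channel widths."""
    return tuple(rng.choice(widths) for _ in range(n_blocks))

def param_count(arch):
    """Rough 3x3-conv parameter count for a chain of blocks (3 input channels)."""
    chans = (3,) + arch
    return sum(9 * cin * cout for cin, cout in zip(chans, chans[1:]))

def proxy_score(arch):
    """Hypothetical quality proxy: wider blocks help, with diminishing returns."""
    return sum(w ** 0.5 for w in arch)

def search(trials=200, budget=80_000, seed=0):
    """Random search under a parameter budget: keep the best-scoring
    architecture that is still lightweight enough for mobile use."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        arch = sample_architecture(rng)
        if param_count(arch) > budget:
            continue  # too heavy, reject before scoring
        s = proxy_score(arch)
        if s > best_score:
            best, best_score = arch, s
    return best

best = search()
```

Real NAS systems replace `proxy_score` with (partial) training and validation, which is where almost all of the compute goes; the search loop itself stays this simple.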
Facebook AI Research (FAIR) introduced their own Go bot last year, aiming to reproduce AlphaGo Zero results using their Extensive, Lightweight and Flexible (ELF) platform for reinforcement learning research. FAIR recently added new features to ELF OpenGo and has open-sourced the project.
Synced is proud to present Gary Marcus as the last installment in our Lunar New Year Project — a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. (Read the previous articles on Clarifai CEO Matt Zeiler and Google Brain Researcher Quoc Le.)
In 2017 Google introduced Federated Learning (FL), “a specific category of distributed machine learning approaches which trains machine learning models using decentralized data residing on end devices such as mobile phones.” A new Google paper now proposes a scalable production system for federated learning, designed to handle growing workloads by adding resources such as compute, storage, and bandwidth.
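The core of the approach quoted above is a federated averaging loop: each device trains on its own local data, and only model weights travel to the server, which averages them. The sketch below is a minimal illustration of that idea, not Google's production system; the linear model and the synthetic per-device datasets are hypothetical:

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local gradient descent on a linear model (squared loss).
    Raw data never leaves the device; only the updated weights do."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, rounds=20):
    """Server loop: broadcast weights, collect local updates, and average
    them, weighting each client by its number of local examples."""
    for _ in range(rounds):
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        updates = [local_update(global_w, c) for c in clients]
        global_w = np.average(updates, axis=0, weights=sizes / sizes.sum())
    return global_w

# Hypothetical decentralized data: each "device" holds its own samples
# drawn from the same underlying relationship.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = federated_averaging(np.zeros(2), clients)
```

The weighting by client dataset size is what makes the averaged model approximate training on the pooled data, even though no server ever sees that pool.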
The San Francisco-based AI non-profit OpenAI has raised eyebrows in the research community with its unusual decision not to release the language model’s code and training dataset. In a statement sent to Synced, OpenAI explained the choice was made to prevent malicious use: “it’s clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse.”
Uber has unveiled Ludwig, a new TensorFlow-based toolkit that enables users to train and test deep learning models without writing any code. The toolkit will help non-experts understand models and accelerate their iterative development by simplifying the prototyping process and data processing.
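Instead of code, Ludwig drives training from a declarative model definition. A configuration along these lines (an illustrative sketch based on the announced YAML format; the column names here are hypothetical, and exact fields may differ by version) declares what to predict from what:

```yaml
input_features:
  - name: news_text   # a text column in the training CSV
    type: text
output_features:
  - name: topic       # the category column to predict
    type: category
```

Training then reduces to pointing the Ludwig command-line tool at this file and a CSV dataset, with preprocessing and model architecture inferred from the declared feature types.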
In December Synced reported on a hyperrealistic face generator developed by US chip giant NVIDIA. The GAN-based model performs so well that most people can’t distinguish the faces it generates from real photos. This week NVIDIA announced that it is open-sourcing the nifty tool, which it has dubbed “StyleGAN”.
The Synced Lunar New Year Project is a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this second installment (click here to read the previous article on Clarifai CEO Matt Zeiler), Synced speaks with Google Brain Researcher Quoc Le on his latest invention, AutoML, Google Brain’s pursuit of AI, and the secret of transforming lab technologies into real practices.
Uber AI Lab has created a buzz in the machine learning community with the publication of a paper introducing a new reinforcement learning algorithm called Go-Explore. The algorithm is designed to overcome the challenges of exploration in reinforcement learning and improve performance on hard-exploration tasks.
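Go-Explore's central move is to separate returning from exploring: remember interesting states ("cells") in an archive, deterministically return to one, and only then explore randomly. The toy corridor environment below is a minimal illustration of that loop under strong simplifying assumptions (a deterministic environment where restoring a state is trivial), not Uber's implementation:

```python
import random

# Toy deterministic environment: walk a 1-D corridor from cell 0 to GOAL.
GOAL = 50
ACTIONS = (-1, +1)

def step(state, action):
    return max(0, min(GOAL, state + action))

def go_explore(iterations=5000, explore_steps=10, seed=0):
    """Minimal 'return then explore' loop: keep an archive of visited
    cells, restore one, take a few random actions, and archive any
    newly reached cell along with the action sequence that got there."""
    rng = random.Random(seed)
    archive = {0: []}  # cell -> trajectory of actions reaching it from 0
    for _ in range(iterations):
        cell = rng.choice(list(archive))   # select a cell to return to
        trajectory = list(archive[cell])
        state = cell                       # deterministic env: restore = teleport
        for _ in range(explore_steps):     # explore from the restored state
            a = rng.choice(ACTIONS)
            state = step(state, a)
            trajectory.append(a)
            if state not in archive:       # remember how to reach new cells
                archive[state] = list(trajectory)
        if GOAL in archive:
            break
    return archive

archive = go_explore()
```

Because every archived trajectory can be replayed from the start, the method never forgets how to reach a promising state, which is exactly what defeats the "detachment" problem in naive random exploration.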
Last December some 9,000 attendees packed a single venue in Montreal for a week-long academic conference. NeurIPS was completely sold out, the latest indication of just how hot AI is nowadays. As AI and machine learning continue to ignite discussion across a wide variety of disciplines, novel approaches to the tech are also garnering interest.
This is the first installment of the Synced Lunar New Year Project, a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this article, Synced chats with Clarifai Founder and CEO Matt Zeiler on recent progress in computer vision and his company’s plans for the future. Founded in New York in 2013, Clarifai produces advanced image recognition systems.
Papers With Code is a unique and useful resource that presents trending ML research along with the code to implement it. The site was created by Atlas ML CEO Robert Stojnic, aka “rstoj” on Reddit’s machine learning board. The latest version of Papers With Code has added 950+ unique machine learning tasks, 500+ state-of-the-art result leaderboards, and 8,500+ papers with code.
There’s a lot more to a friendly game of Jenga than meets the eye. Strategies are informed by a complex set of tactile and visual stimuli — by touching a block and observing the tower, we not only see but also feel our actions and their consequences. The MIT Jenga robot thus marks an important step in AI’s transition to the physical world.
In the 1988 IEEE paper Cellular Neural Networks: Theory, UC Berkeley PhD student Lin Yang and his advisor Leon O. Chua proposed Cellular Neural Network theory, a predecessor of the Convolutional Neural Networks (CNN) that would later revolutionize machine learning. Based on this theory, Yang blueprinted a 20×20 parallel analog circuit chip in the university lab.