Google Open-Sources GPipe Library for Training Large-Scale Neural Network Models
Google this week introduced GPipe, an open-source library that dramatically improves training efficiency for large-scale neural network models.
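GPipe's core idea is pipeline parallelism: a big model is partitioned into sequential stages placed on different accelerators, and each mini-batch is split into micro-batches so the stages can work concurrently. The toy sketch below illustrates only the micro-batching flow; the stage functions are hypothetical placeholders, not the GPipe API.

```python
# Minimal sketch of GPipe-style micro-batch pipelining.
# stage1/stage2 are hypothetical stand-ins for model partitions.

def stage1(x):
    # First model partition (placeholder computation).
    return [v * 2 for v in x]

def stage2(x):
    # Second model partition (placeholder computation).
    return [v + 1 for v in x]

def pipeline_forward(batch, stages, num_micro_batches=4):
    """Split a mini-batch into micro-batches and push each one
    through the stages in sequence. On real hardware, different
    stages would process different micro-batches at the same time."""
    size = max(1, len(batch) // num_micro_batches)
    micro_batches = [batch[i:i + size] for i in range(0, len(batch), size)]
    outputs = []
    for mb in micro_batches:
        for stage in stages:
            mb = stage(mb)
        outputs.extend(mb)
    return outputs

result = pipeline_forward(list(range(8)), [stage1, stage2])
```

Because each micro-batch produces the same result as running the full batch through the stages, the pipelining changes throughput, not the computed output.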
The organizers of NeurIPS (Conference on Neural Information Processing Systems) today announced the dates and other information regarding NeurIPS 2019.
The Conference on Computer Vision and Pattern Recognition (CVPR) is one of the world’s top computer vision (CV) conferences. CVPR 2019 runs June 16 through June 20 in Long Beach, California, and the list of accepted papers for the prestigious gathering has now been released.
Microsoft Research Asia and University of Science and Technology of China have jointly released a new human pose estimation model which has set records on three COCO benchmarks.
Facebook AI Infrastructure Director Yangqing Jia is leaving his position with the company, a person familiar with the matter told Synced. The Facebook team confirmed his departure yesterday.
10 AI News You Must Know From February W3 – W4
Machine learning models based on deep neural networks have achieved unprecedented performance on many tasks. These models are generally considered to be complex systems that are difficult to analyze theoretically. Moreover, because optimization is typically governed by a high-dimensional non-convex loss surface, describing the gradient-based training dynamics of these models is very challenging.
Synced Global AI Weekly March 3rd
Beijing winters can be devastating for feral cats, with studies suggesting only about 40 percent make it through the long stretch of cold and harsh weather. A Baidu AI engineer who goes by the alias “Wan’xi” (晚兮) set out to make a difference for vulnerable neighbourhood kitties, and the result is an AI-powered smart shelter system.
Every year as the calendar turns from February to March, the world’s leading electronics and telecommunications companies, startups, inventors, and a herd of tech journalists and analysts head to the Mobile World Congress.
It’s not uncommon to find scalpers at concert halls and sporting events, hawking admission tickets at inflated prices. Chinese hospitals however have been invaded by a more problematic breed of scalper — those who deal in the appointment registration tokens that hospitals use to process patient visits.
The Conference on Computer Vision and Pattern Recognition (CVPR) announced this week they have accepted 1300 research papers for CVPR 2019, which will be held June 16 – 20 in Long Beach, California. This year’s submission and acceptance totals both set records for the world’s premier computer vision conference, which had never before accepted more than 1000 papers.
As Synced previously reported, these hyperrealistic images now flooding the Internet come from US chip giant NVIDIA’s StyleGAN, a generative adversarial network based face generator that performs so well that most people can’t distinguish its creations from photos of real people.
Beijing-based AI chip startup Horizon Robotics today announced a staggering US$600 million in Series B funding led by South Korean conglomerate SK Group. The investment brings Horizon’s estimated valuation to US$3 billion, making it the world’s highest-valued AI chip startup.
Having notched impressive victories over human professionals in Go, Atari Games, and most recently StarCraft 2 — Google’s DeepMind team has now turned its formidable research efforts to soccer. In a paper released last week, the UK AI company demonstrates a novel machine learning method that trains a team of AI agents to play a simulated version of “the beautiful game.”
Researchers from Beijing-based AI unicorn SenseTime and Nanyang Technological University have trained AlexNet on ImageNet in a record-breaking 1.5 minutes, a significant 2.6 times speedup over the previous record of 4 minutes.
Synced Global AI Weekly February 24th
According to a CIRP study, Amazon Alexa has a commanding 70 percent market share in the US, with rival Google Assistant taking 24 percent. A Market Research Future study predicts the voice assistant market will reach US$7.8 billion by 2023, a compound annual growth rate of almost 40 percent.
Facebook AI Chief Yann LeCun introduced his now-famous “cake analogy” at NIPS 2016: “If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning (RL).”
Last Monday US President Donald Trump signed the “American AI Initiative,” an executive order designed to spur US investment in artificial intelligence and boost the domestic AI industry. The initiative has five highlights: Investing in AI Research and Development (R&D), Unleashing AI Resources, Setting AI Governance Standards, Building the AI Workforce, International Engagement and Protecting our AI Advantage.
In its new paper Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search, Xiaomi’s research team introduces a deep convolutional neural network (CNN) model discovered using a neural architecture search (NAS) approach. Its performance is comparable to cutting-edge models such as CARN and CARN-M.
GitHub developer Hugging Face has updated its repository with a PyTorch reimplementation of the small version of the GPT-2 language model that OpenAI open-sourced last week, along with pretrained models and fine-tuning examples.
Facebook AI Research (FAIR) introduced their own Go bot last year, aiming to reproduce AlphaGo Zero results using their Extensible, Lightweight Framework (ELF) for reinforcement learning research. FAIR recently added new features to ELF OpenGo and has open-sourced the project.
10 AI News You Must Know From February W1 – W2
There are some things that some people just don’t want showing up on their websites, and this has spawned a wide range of activities and technologies that fall under “content review.”
Synced Global AI Weekly February 17th
Synced spoke with AI pioneer Professor Yoshua Bengio at the Computing in the 21st Century Conference in Beijing, where he discussed his recent research and the current state of AI.
Microsoft researchers have released technical details of an AI system that combines multi-task learning and language model pretraining. The new Multi-Task Deep Neural Network (MT-DNN) is a natural language processing (NLP) model that outperforms Google BERT in nine of eleven benchmark NLP tasks.
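The multi-task idea behind models like MT-DNN is that one shared encoder feeds several task-specific output heads, so representations learned for one task benefit the others. The sketch below is a toy illustration of that wiring with placeholder functions, not the actual MT-DNN architecture.

```python
# Toy sketch of a shared encoder feeding multiple task heads
# (multi-task learning pattern; hypothetical placeholder functions).

def shared_encoder(tokens):
    # Placeholder "representation": token count and total character length.
    return (len(tokens), sum(len(t) for t in tokens))

def classification_head(features):
    # One task head operating on the shared features.
    return "long" if features[1] > 10 else "short"

def similarity_head(features_a, features_b):
    # A second task head reusing the same shared encoder output.
    return abs(features_a[1] - features_b[1])

feats = shared_encoder(["multi", "task", "learning"])
label = classification_head(feats)
```

In a real system the encoder would be a pretrained transformer trained jointly on all task losses; only the heads are task-specific.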
Synced is proud to present Gary Marcus as the last installment in our Lunar New Year Project — a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. (Read the previous articles on Clarifai CEO Matt Zeiler and Google Brain Researcher Quoc Le.)
In 2017 Google introduced Federated Learning (FL), “a specific category of distributed machine learning approaches which trains machine learning models using decentralized data residing on end devices such as mobile phones.” A new Google paper has now proposed a scalable production system for federated learning that can handle increasing workloads by adding resources such as compute, storage, and bandwidth.
The San Francisco-based AI non-profit however has raised eyebrows in the research community with its unusual decision to not release the language model’s code and training dataset. In a statement sent to Synced, OpenAI explained the choice was made to prevent malicious use: “it’s clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse.”
Uber has unveiled Ludwig, a new TensorFlow-based toolkit that enables users to train and test deep learning models without writing any code. The toolkit will help non-experts understand models and accelerate their iterative development by simplifying the prototyping process and data processing.
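Ludwig's code-free workflow is driven by a declarative model definition file that names the input and output features of a dataset. A minimal sketch of such a YAML file, with hypothetical feature names, might look like:

```yaml
# Hypothetical Ludwig-style model definition; feature names and
# types are illustrative, not from an actual project.
input_features:
  - name: review_text
    type: text
output_features:
  - name: sentiment
    type: category
```

Training is then launched from the command line by pointing the toolkit at a CSV dataset and this definition file, so no model code needs to be written.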
CUHK researchers recently teamed up with Chinese AI giant SenseTime to develop DeepFashion2, a greatly improved large-scale benchmark with comprehensive tasks and annotations for fashion image understanding.
Debating is a hallmark of human civilization, and few do it better than World Debating Championship Finalist Harish Natarajan. Last night at Yerba Buena Center for the Arts in San Francisco, Natarajan stepped up against an AI-empowered debating machine from IBM.
Welcome to the Year of the Pig! Lunar New Year is China’s biggest holiday, with this year’s celebrations picking up during the “Little Year” period in late January, peaking February 4 for New Year’s Eve, and continuing through February 19.
Synced Global AI Weekly February 10th
In December Synced reported on a hyperrealistic face generator developed by US chip giant NVIDIA. The GAN-based model performs so well that most people can’t distinguish the faces it generates from real photos. This week NVIDIA announced that it is open-sourcing the nifty tool, which it has dubbed “StyleGAN”.
The Synced Lunar New Year Project is a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. In this second installment (click here to read the previous article on Clarifai CEO Matt Zeiler), Synced speaks with Google Brain Researcher Quoc Le on his latest invention, AutoML, Google Brain’s pursuit of AI, and the secret of transforming lab technologies into real practices.
Facebook researchers have introduced two new methods for pretraining cross-lingual language models (XLMs). The unsupervised method uses monolingual data, while the supervised version leverages parallel data with a new cross-lingual language model.
Google rang in the Lunar New Year with a couple of AI-powered treats: a new Live Transcribe service to help the deaf and hard of hearing, and a Google Doodle showcasing the ancient Chinese art of Shadow Puppetry.