Robots Vs Robocalls
Robots pitching politics, charities, sales, surveys, and scams of every stripe and in every tongue. Increasingly that’s the voice on the other end of a global citizen’s next incoming telephone call.
AI Technology & Industry Review
In a scene that could be from a sci-fi movie, a YouTube video posted today by robotics company Boston Dynamics shows "Handle," a huge, ostrich-like robot, wheeling around a warehouse while deftly moving boxes. The video garnered over 138,000 views in less than four hours.
Andrew Brock, first author of the high-profile research paper Large Scale GAN Training for High Fidelity Natural Image Synthesis (aka “BigGAN”), has posted a GitHub repository of an unofficial PyTorch BigGAN implementation that requires only 4-8 GPUs to train the model.
SyncedLeg is a tool for mining influential keywords from a corpus using traffic data. A team of Synced interns developed the tool over an internal two-day hackathon, naming it after their team "机器之腿" ("Machine's Leg" in Chinese).
The ACM (Association for Computing Machinery) this morning announced Geoffrey Hinton, Yann LeCun and Yoshua Bengio as its 2018 Turing Award winners.
In incomplete-information environments, the asynchronous neural fictitious self-play (ANFSP) method allows AI to learn optimal decisions across multiple virtual environments. The approach has performed well in Texas Hold'em and multiplayer FPS video games.
To provide accurate precipitation nowcasting, Beijing startup ColorfulClouds Tech is applying machine learning (ML) techniques using observed radar echo maps to generate high resolution, minute-by-minute rainfall forecasts on its ColorfulClouds Weather mobile app.
Google's Snorkel Drybell is an experimental internal system that leverages the open-source Snorkel framework to harness various existing organizational knowledge resources and generate training data for web-scale machine learning models.
Baidu has released ERNIE (Enhanced Representation through kNowledge IntEgration), a new knowledge integration language representation model which outperforms Google’s state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) in Chinese language tasks.
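The "knowledge integration" idea in ERNIE is that masking during pretraining operates on whole entity and phrase spans rather than on independent tokens, as in BERT's original scheme. A minimal illustrative sketch (the span indices and helper name here are hypothetical, not ERNIE's actual API):

```python
def mask_entity_spans(tokens, spans, mask_token="[MASK]"):
    """ERNIE-style masking sketch: mask whole entity spans as units,
    rather than sampling individual tokens as BERT's original scheme does.
    `spans` are hypothetical (start, end) indices of detected entities."""
    out = list(tokens)
    for start, end in spans:
        for i in range(start, end):
            out[i] = mask_token
    return out

tokens = ["Harry", "Potter", "is", "a", "series", "by", "J.K.", "Rowling"]
# Mask the entity "J.K. Rowling" as one unit.
print(mask_entity_spans(tokens, [(6, 8)]))
# -> ['Harry', 'Potter', 'is', 'a', 'series', 'by', '[MASK]', '[MASK]']
```

Because the model must predict the full entity from context, it is pushed to learn relations between entities rather than only local token co-occurrence.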
Synced Global AI Weekly March 24th
While there is nothing wrong with viewing autonomous driving as an emerging sector in the car manufacturing industry, big tech companies are also playing a central role, and have been making huge investments in automotive technologies.
Preferred Networks (PFN) is completing a new private supercomputer, MN-2, which the Japanese AI startup expects to have operational in July 2019.
NVIDIA CEO and Co-Founder Jensen Huang says a rumored next-generation GPU architecture is not a priority for the company, and that he remains optimistic about clearing the chip inventory built up for cryptocurrency mining. Huang made the remarks in a press conference Tuesday at the GPU Technology Conference (GTC) in Santa Clara.
A new dissertation from University of Pretoria Information Technology master student John Leuner has revisited the thorny question of whether machine learning methods can effectively detect sexual orientation.
It is no secret that deep neural networks (DNNs) can achieve state-of-the-art performance on a wide range of complicated tasks. Models such as BigGAN, BERT, and GPT-2 have proved the high potential of deep learning. Deploying DNNs on mobile devices, consumer devices, drones, and vehicles, however, remains a bottleneck for researchers.
DeepMind's Research Platform Team has open-sourced TF-Replicator, a framework that enables researchers without prior distributed-systems experience to deploy their TensorFlow models on GPUs and Cloud TPUs. The move aims to strengthen AI research and development.
10 AI News You Must Know From March W1 – W2
NVIDIA’s annual GPU Technology Conference (GTC) attracted some 9,000 developers, buyers and innovators to San Jose, California this week. CEO and Co-Founder Jensen Huang’s two-and-a-half hour keynote speech fused GPU-based innovations in domains ranging from graphic design to autonomous driving.
Model-free reinforcement learning can be used to learn effective strategies for complex tasks such as Atari games, but it usually requires a large amount of interaction, which adds significant time and cost.
No wow moments, no bells, and no whistles. Jensen Huang has delivered some groundbreaking keynote speeches in his years at the helm of NVIDIA, but today’s was not among them.
At the NVIDIA GPU Technology Conference (GTC) which kicked off today, NVIDIA unveiled its latest image processing research effort — GauGAN, a generative adversarial network-based technique capable of transforming segmentation maps into realistic photos.
For years now, AI researchers have been leveraging game environments to train computer models to react to complicated scenarios and make decisions accordingly. In some ways, the trial-and-error process mimics how children learn about the world around them.
Synced Global AI Weekly March 17th
AI-empowered technologies such as natural language processing (NLP) are increasingly active in the labour-intensive world of call centres — concentrated offices used for sending or receiving a large volume of requests by telephone.
GTC 2019 runs next Monday through Thursday (March 18 — 21), and while we can only speculate what surprises NVIDIA CEO Jensen Huang might have in store for us, we can get some sense of where the company is headed by looking at what it’s been up to for the last 12 months.
Zion is Facebook's next-generation, large-memory unified training platform; Kings Canyon is an integrated circuit optimized for AI inference; and Mount Shasta is a specialized ASIC for video transcoding.
Last October Stanford University announced plans to create an institute built for artificial intelligence research and development. Today, the school made good on its pledge, launching the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI) with a mission “to advance AI research, education, policy, and practice to improve the human condition.”
On Sunday, March 10, an Ethiopian Airlines Boeing 737 MAX 8 aircraft en route to Nairobi crashed shortly after takeoff from Addis Ababa, killing all 157 people on board. It was the second fatal crash in six months involving Boeing 737 MAX 8 airplanes.
Google yesterday announced a new program, Season of Docs, that aims to make a substantive contribution to open source software development. The eight-month project will assemble a team of technical writers to work on improving documentation for various open source projects.
A new GitHub project, PyTorch Geometric (PyG), is attracting attention across the machine learning community. PyG is a geometric deep learning extension library for PyTorch dedicated to processing irregularly structured input data such as graphs, point clouds, and manifolds.
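The core pattern PyG builds on is message passing over a node-feature matrix plus a COO-format edge index. The sketch below illustrates that pattern in pure Python with a mean aggregation (PyG itself operates on PyTorch tensors, and this helper is an illustration, not PyG's API):

```python
def mean_aggregate(x, edge_index):
    """x: list of node feature vectors; edge_index: (sources, targets)
    in COO format, as PyG uses. Returns, per node, the mean of its
    incoming neighbors' features."""
    num_nodes, dim = len(x), len(x[0])
    sums = [[0.0] * dim for _ in range(num_nodes)]
    counts = [0] * num_nodes
    for src, dst in zip(*edge_index):
        for d in range(dim):
            sums[dst][d] += x[src][d]  # message from src to dst
        counts[dst] += 1
    # Aggregate: average incoming messages (isolated nodes get zeros).
    return [[s / counts[i] if counts[i] else 0.0 for s in sums[i]]
            for i in range(num_nodes)]

# Triangle graph 0->1, 1->2, 2->0, plus an extra edge 0->2.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
edge_index = ([0, 1, 2, 0], [1, 2, 0, 2])
out = mean_aggregate(x, edge_index)
print(out[2])  # node 2 averages the features of nodes 1 and 0 -> [0.5, 0.5]
```

A learned graph layer then typically applies a weight matrix and nonlinearity to these aggregated features; PyG packages that whole pipeline with GPU-friendly scatter operations.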
Mask R-CNN (Region-based Convolutional Neural Network) has been the state-of-the-art model for object instance segmentation since it was proposed by Facebook Research Scientist Kaiming He in 2017, winning Best Paper at ICCV the same year.
In a move that has surprised many, OpenAI today announced the creation of a new for-profit company to balance its huge expenditures on compute and AI talent. Sam Altman, the former president of Y Combinator who stepped down last week, has been named CEO of the new "capped-profit" company, OpenAI LP.
The University of California has halted all further subscriptions with one of the world’s largest scholarly publishers, Amsterdam-based Elsevier. The move follows more than six months of negotiations which failed to reach a substantial agreement on securing universal open access to UC research.
Synced Global AI Weekly March 10th
Last November Synced ran an interview with Yoshua Bengio, in which the deep learning maverick, Université de Montréal Professor and MILA Scientific Director discussed his research and commented on the current state of deep learning and AI.
TensorFlow is the world’s most popular open source machine learning library. Since its initial release in 2015, the Google Brain product has been downloaded over 41 million times. At this week’s 2019 TensorFlow Dev Summit, Google announced a major upgrade on the framework, the TensorFlow 2.0 Alpha version.
Natural language processing has made significant progress in the past year, but few frameworks focus directly on NLP or sequence modeling. Google Brain recently released Lingvo, a deep learning framework based on TensorFlow. Synced invited Ni Lao, Chief Science Officer at Mosaix, to share his thoughts on Lingvo.
A paper recently accepted for ICLR 2019 proposes a novel optimizer, AdaBound, which its authors say can train machine learning models "as fast as Adam and as good as SGD." Basically, AdaBound is an Adam variant that employs dynamic bounds on learning rates to achieve a gradual, smooth transition to SGD.
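The dynamic-bounds idea can be sketched as an Adam-style update whose per-parameter step size is clipped between a lower and upper bound that both converge toward a fixed `final_lr` as training progresses, so the update gradually becomes SGD-like. This is a simplified 1-D sketch (it omits bias correction and other details of the published algorithm, and the hyperparameter values are illustrative):

```python
import math

def adabound_step(theta, grad, state, base_lr=0.001, final_lr=0.1,
                  beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    """One AdaBound-style update on a scalar parameter: an Adam step
    whose learning rate is clipped to bounds that tighten toward
    final_lr, so late-stage updates approach plain SGD."""
    state["t"] += 1
    t = state["t"]
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad          # momentum
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad   # 2nd moment
    # Bounds start loose (near [0, inf)) and converge to final_lr as t grows.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = base_lr / (math.sqrt(state["v"]) + eps)  # Adam-style adaptive step
    step = min(max(step, lower), upper)             # the dynamic clipping
    return theta - step * state["m"]

# Minimize f(x) = x^2 from x = 5 with gradient 2x.
state = {"t": 0, "m": 0.0, "v": 0.0}
x = 5.0
for _ in range(500):
    x = adabound_step(x, 2 * x, state)
print(f"x after 500 steps: {x:.4f}")
```

Early in training the wide bounds leave Adam's adaptive step untouched; later, the tightening bounds pin the step size near `final_lr`, which is the smooth hand-off to SGD the authors describe.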
Google this week introduced GPipe, an open-source library that dramatically improves training efficiency for large-scale neural network models.
The organizers of NeurIPS (Conference on Neural Information Processing Systems) today announced the dates and other information regarding NeurIPS 2019.