Category: AI

Global machine intelligence updates.

AI Machine Learning & Data Science Research

Only Train Once: SOTA One-Shot DNN Training and Pruning Framework

A research team from Microsoft, Zhejiang University, Johns Hopkins University, Georgia Institute of Technology and University of Denver proposes Only-Train-Once (OTO), a one-shot DNN training and pruning framework that produces a slim architecture from a full heavy model without fine-tuning while maintaining high performance.

AI Machine Learning & Data Science Research

Baidu’s Knowledge-Enhanced ERNIE 3.0 Pretraining Framework Delivers SOTA NLP Results, Surpasses Human Performance on the SuperGLUE Benchmark

A research team from Baidu proposes ERNIE 3.0, a unified framework for pretraining large-scale, knowledge-enhanced models that can easily be tailored for both natural language understanding and generation tasks with zero-shot learning, few-shot learning or fine-tuning, and achieves state-of-the-art results on NLP tasks.

AI Machine Learning & Data Science Research

New Study Proposes Quantum Belief Function, Achieves Exponential Time Acceleration

A research team from the University of Electronic Science and Technology of China, the Chinese Academy of Sciences, the School of Education at Shaanxi Normal University, the Japan Advanced Institute of Science and Technology and ETH Zurich encodes the basic belief assignment (BBA) into quantum states and implements them on a quantum circuit, aiming to leverage the characteristics of quantum computation to better handle belief functions.

AI Machine Learning & Data Science Research

Two Lines of Code to Use a 2080Ti to Achieve What Was Previously Only Possible on a V100

As dynamic computational graphs are now widely supported by machine learning frameworks, GPU memory utilization when training on a dynamic graph has become a key specification for these frameworks. In the recently released v1.4, MegEngine reduces GPU memory usage at the cost of additional computation, using the Dynamic Tensor Rematerialization (DTR) technique together with further engineering optimizations, which makes large-batch-size training on a single GPU possible.

AI Asia Global News

Tencent’s 7 Billion Dollar AI Supercomputing Center in the Yangtze River Delta Commences Operation

At the World Artificial Intelligence Conference (WAIC) held in Shanghai on July 9, Daosheng Tang, senior executive vice president of Tencent and president of the Tencent Cloud and Smart Industry Group, said that the company’s Yangtze River Delta AI Supercomputing Center, built with an RMB 45 billion (approx. USD 7 billion) investment, will soon commence operation.

AI Computer Vision & Graphics Machine Learning & Data Science Popular Research

Facebook & UC Berkeley Substitute a Convolutional Stem to Dramatically Boost Vision Transformers’ Optimization Stability

A research team from Facebook AI and UC Berkeley finds a solution for vision transformers’ optimization instability problem by simply using a standard, lightweight convolutional stem for ViT models. The approach dramatically increases optimizer stability and improves peak performance without sacrificing computation efficiency.
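The difference between the two stems comes down to how the 16x spatial reduction is achieved. A small arithmetic sketch (illustrative; the exact layer counts in the paper may differ) shows that a stack of stride-2 3x3 convolutions reaches the same 14x14 token grid as the standard single-shot patchify convolution:

```python
# Compare the downsampling of a ViT "patchify" stem (one large-stride
# conv) with a convolutional stem (a stack of small stride-2 convs).
# Both reduce a 224x224 image by 16x to a 14x14 token grid.

def conv_out(size, kernel, stride, padding):
    """Output spatial size of a convolution (standard formula)."""
    return (size + 2 * padding - kernel) // stride + 1

# Patchify stem: a single 16x16 conv with stride 16.
patchify = conv_out(224, kernel=16, stride=16, padding=0)

# Convolutional stem: four 3x3 stride-2 convs with padding 1,
# giving the same overall 16x reduction in gradual steps.
size = 224
for _ in range(4):
    size = conv_out(size, kernel=3, stride=2, padding=1)

print(patchify, size)  # prints 14 14
```

The gradual reduction is what the authors credit with the improved optimizer stability, since it replaces the single very-large-kernel, large-stride projection at the network's input.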

AI Computer Vision & Graphics Machine Learning & Data Science Research

Video Swin Transformer Improves Speed-Accuracy Trade-offs, Achieves SOTA Results on Video Recognition Benchmarks

A research team from Microsoft Research Asia, University of Science and Technology of China, Huazhong University of Science and Technology, and Tsinghua University takes advantage of the inherent spatiotemporal locality of videos to present a pure-transformer backbone architecture for video recognition that leads to a better speed-accuracy trade-off.

AI Machine Learning & Data Science Research

New Milestone for Deep Potential Application: Predicting the Phase Diagram of Water

A research team from Princeton University, the Institute of Applied Physics and Computational Mathematics and the Beijing Institute of Big Data Research uses the Deep Potential (DP) method to predict the phase diagram of water from ab initio quantum theory, from low temperature and pressure to about 2400 K and 50 GPa. The paper was published in leading physics journal Physical Review Letters and represents an important milestone in the application of DP.

AI Asia Global News

South Korea’s LG Electronics Rolls Out AI-Powered Digital X-Ray Detector

On June 22, LG Electronics announced the launch of a new “digital x-ray detector” (DXD). The new product is equipped with AI-assisted diagnostic functions developed by healthcare AI solutions company VUNO. The product scans chest X-ray images for abnormal findings and enhances lesion areas with coloring and outlines, helping medical professionals accurately identify lung diseases including tuberculosis, pneumonia, and cancer.

AI Machine Learning & Data Science Natural Language Tech Research

Google Researchers Merge Pretrained Teacher LMs Into a Single Multilingual Student LM Via Knowledge Distillation

A Google Research team proposes MergeDistill, a framework for merging pretrained teacher LMs from multiple monolingual/multilingual LMs into a single multilingual task-agnostic student LM to leverage the capabilities of the powerful language-specific LMs while still being multilingual and enabling positive language transfer.
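The distillation objective at the heart of such teacher-to-student transfer can be sketched in a few lines (a minimal illustration, not the paper's implementation; function names and the temperature value are assumptions): the student is trained to match the temperature-softened output distribution of a teacher LM via a KL-divergence loss.

```python
# Minimal sketch of a knowledge-distillation loss: KL divergence
# between the teacher's and student's temperature-softened softmax
# distributions over the vocabulary.
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over softened distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
student = [1.8, 1.1, 0.3]
loss = distill_loss(teacher, student)
# loss > 0, and shrinks toward 0 as the student matches the teacher
```

In MergeDistill's setting there are multiple monolingual/multilingual teachers, each supervising the single student on its own languages, so a loss of this shape is applied per teacher over the student's shared vocabulary.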

AI Machine Learning & Data Science Research

Pieter Abbeel Team’s Decision Transformer Abstracts RL as Sequence Modelling

A research team from UC Berkeley, Facebook AI Research and Google Brain abstracts Reinforcement Learning (RL) as a sequence modelling problem. Their proposed Decision Transformer simply outputs optimal actions by leveraging a causally masked transformer, yet matches or exceeds state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
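The reframing of RL as sequence modelling hinges on how a trajectory is laid out as tokens. A small sketch (illustrative layout only; helper names are assumptions, and the real model embeds each modality before feeding a GPT-style transformer) shows the interleaved (return-to-go, state, action) sequence the Decision Transformer conditions on:

```python
# Sketch of the Decision Transformer input layout: a trajectory is
# flattened into interleaved (return-to-go, state, action) tokens, so
# a causally masked transformer can predict each action from the
# desired return and the history so far.

def returns_to_go(rewards):
    """Suffix sums: R_t = sum of rewards from step t to the end."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return list(reversed(rtg))

def to_tokens(states, actions, rewards):
    rtg = returns_to_go(rewards)
    seq = []
    for R, s, a in zip(rtg, states, actions):
        seq += [("rtg", R), ("state", s), ("action", a)]
    return seq

tokens = to_tokens(states=["s0", "s1"], actions=[0, 1], rewards=[1.0, 2.0])
# [('rtg', 3.0), ('state', 's0'), ('action', 0),
#  ('rtg', 2.0), ('state', 's1'), ('action', 1)]
```

At test time, conditioning on a high initial return-to-go token is what steers the causally masked transformer toward emitting high-return actions, with no value function or policy gradient involved.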