Latest Posts

AI Machine Learning & Data Science Research

UC Berkeley’s Sergey Levine Says Combining Self-Supervised and Offline RL Could Enable Algorithms That Understand the World Through Actions

In the new paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences, argues that reinforcement learning can provide a general, principled and powerful framework for utilizing unlabelled data, enabling machine learning systems that leverage large datasets to understand the real world.

AI Machine Learning & Data Science Research

Integrating Self-Attention and Convolution: Tsinghua, Huawei & BAAI’s ACmix Achieves SOTA Performance on CV Tasks With Minimum Cost

In the new paper On the Integration of Self-Attention and Convolution, a research team from Tsinghua University, Huawei Technologies Ltd. and the Beijing Academy of Artificial Intelligence proposes ACmix, a mixed model that leverages the benefits of both self-attention and convolution for computer vision representation tasks while achieving minimum computational overhead compared to its pure convolution or self-attention counterparts.
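
The shared-projection idea is easy to picture. The sketch below is a minimal illustration, not the paper's implementation: the 1×1 projections are computed once and reused by both an attention branch and a convolution-style branch, whose outputs are mixed with learned scalars. The shapes, the shift-based "convolution" and the mixing weights are all assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of the ACmix idea (not the paper's exact code):
# convolution and self-attention both start from 1x1 projections, so the
# projections are computed once and reused by both branches.
rng = np.random.default_rng(0)
N, C = 16, 8                      # tokens (flattened patches), channels
x = rng.normal(size=(N, C))

Wq, Wk, Wv = (rng.normal(size=(C, C)) * 0.1 for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv  # shared 1x1 projections (computed once)

# Attention branch: standard scaled dot-product self-attention.
scores = q @ k.T / np.sqrt(C)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out_attn = attn @ v

# Conv-style branch: aggregate the *same* projections over local shifts
# (a stand-in for the shift-and-sum kernel aggregation described in the paper).
out_conv = sum(np.roll(v, s, axis=0) for s in (-1, 0, 1)) / 3.0

alpha, beta = 0.5, 0.5            # learned mixing scalars in the real model
y = alpha * out_attn + beta * out_conv
print(y.shape)                    # (16, 8)
```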

AI Machine Learning & Data Science Research

Warsaw U, Google & OpenAI’s Terraformer Achieves a 37x Speedup Over Dense Baselines on 17B Transformer Decoding

In the new paper Sparse is Enough in Scaling Transformers, a research team from the University of Warsaw, Google Research and OpenAI proposes Scaling Transformers, a family of novel transformers that leverage sparse layers to scale efficiently and perform unbatched decoding much faster than original transformers, enabling fast inference on long sequences even with limited memory.
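
To see why sparsity helps unbatched decoding, consider a toy block-sparse feed-forward layer in the spirit of Scaling Transformers: a tiny controller picks one weight block per token, so each decoding step touches only a fraction of the layer's parameters. The block layout and argmax controller below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Toy block-sparse feed-forward layer: the controller selects a single
# weight block per token, so only 1/n_blocks of the parameters are read
# at each decoding step.
rng = np.random.default_rng(0)
d_model, d_ff, n_blocks = 8, 32, 4
block = d_ff // n_blocks

W1 = rng.normal(size=(n_blocks, d_model, block)) * 0.1   # up-projection blocks
W2 = rng.normal(size=(n_blocks, block, d_model)) * 0.1   # down-projection blocks
Wc = rng.normal(size=(d_model, n_blocks)) * 0.1          # lightweight controller

def sparse_ffn(x):                 # x: (d_model,) -- one token at decode time
    b = int(np.argmax(x @ Wc))     # controller picks a single block
    h = np.maximum(x @ W1[b], 0.0) # ReLU over that block's units only
    return h @ W2[b]               # untouched blocks cost nothing

print(sparse_ffn(rng.normal(size=d_model)).shape)  # (8,)
```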

AI Others Research

Time Crystal Study Published in Nature Observes a New Phase of Matter in a Quantum Processor

A team from Google Research, Stanford University, University of Massachusetts, University of California, Columbia University, Princeton University, Max Planck Institute for the Physics of Complex Systems and University of Oxford uses a quantum processor to observe a time crystal, a new phase of matter whose observation could rank among the most significant physics discoveries in decades.

AI Machine Learning & Data Science Popular Research

Google, Cambridge U & Alan Turing Institute Propose PolyViT: A Universal Transformer for Image, Video, and Audio Classification

A research team from Google Research, University of Cambridge and Alan Turing Institute proposes PolyViT, a single transformer model capable of processing multiple modalities and datasets. PolyViT is parameter-efficient and learns representations that generalize across multiple domains.
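
A rough sketch of the co-training recipe follows; all names and shapes are illustrative assumptions. Per-modality tokenizers feed a single shared trunk, a task is sampled at every training step, and only the sampled task's head is used, so the trunk's parameters are shared across all modalities.

```python
import numpy as np

# Minimal sketch of PolyViT-style co-training (illustrative only): one
# shared encoder, modality-specific tokenizers, task-specific heads, and
# a task sampled per training step.
rng = np.random.default_rng(0)
d = 16

def shared_encoder(tokens):               # stand-in for the shared ViT trunk
    return tokens.mean(axis=0)            # (d,) pooled representation

tokenizers = {                            # modality-specific input embeddings
    "image": rng.normal(size=(64, d)) * 0.1,   # 64 patch embeddings
    "audio": rng.normal(size=(32, d)) * 0.1,   # 32 spectrogram-patch embeddings
    "video": rng.normal(size=(128, d)) * 0.1,  # 128 tubelet embeddings
}
heads = {m: rng.normal(size=(d, 10)) * 0.1 for m in tokenizers}  # 10 classes each

for step in range(3):
    task = rng.choice(list(tokenizers))   # sample a task/modality per step
    z = shared_encoder(tokenizers[task])  # shared parameters see every modality
    logits = z @ heads[task]              # task-specific classification head
    print(step, task, logits.shape)
```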

AI Computer Vision & Graphics Machine Learning & Data Science Research

Microsoft’s ‘Florence’ General-Purpose Foundation Model Achieves SOTA Results on Dozens of CV Benchmarks

In the paper Florence: A New Foundation Model for Computer Vision, a Microsoft research team proposes Florence, a novel foundation model for computer vision that significantly outperforms previous large-scale pretraining approaches and achieves new SOTA results across a wide range of visual and visual-linguistic benchmarks.

AI Machine Learning & Data Science Research

Kwai, Kuaishou & ETH Zürich Propose PERSIA, a Distributed Training System That Supports Deep Learning-Based Recommenders of up to 100 Trillion Parameters

A research team from Kwai Inc., Kuaishou Technology and ETH Zürich builds PERSIA, an efficient distributed training system that leverages a novel hybrid training algorithm to ensure both training efficiency and accuracy for extremely large deep learning recommender systems of up to 100 trillion parameters.
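
The hybrid scheme can be caricatured in a few lines. In this toy simulation (emphatically not the real distributed system), the huge, sparsely accessed embedding table receives asynchronous updates from slightly stale gradients, while the small dense model takes synchronous SGD steps; the staleness bound and learning rates are illustrative assumptions.

```python
import numpy as np

# Toy simulation of PERSIA's hybrid idea: async (stale-tolerant) updates
# for the big sparse embedding table, sync updates for the dense part.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 4)) * 0.1   # "huge" embedding table (sparse access)
dense = rng.normal(size=(4, 1)) * 0.1    # small dense model (sync updates)
stale_queue = []                         # async embedding grads land here late

for step in range(100):
    ids = rng.integers(0, 1000, size=8)  # each step touches only a few rows
    x = emb[ids].mean(axis=0)
    y = rng.normal()
    err = float(x @ dense) - y

    dense -= 0.1 * err * x[:, None]      # dense part: synchronous SGD step

    stale_queue.append((ids, 0.1 * err * dense[:, 0]))
    if len(stale_queue) > 2:             # embeddings: gradients applied a few
        old_ids, g = stale_queue.pop(0)  # steps late (bounded staleness)
        emb[old_ids] -= g

print("final squared error:", err ** 2)
```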

AI Machine Learning & Data Science Research

SPANN: A Highly Efficient Billion-Scale Approximate Nearest Neighbour Search That's 2× Faster Than the SOTA Method

A research team from Microsoft, Peking University, Tencent, and Baidu proposes SPANN, a simple but efficient memory-disk hybrid vector indexing and search system that guarantees both low latency and high recall, achieving a 2× speedup over the state-of-the-art approximate nearest neighbour search (ANNS) solution while retaining the same recall quality and memory cost.
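
The memory-disk split is simple to sketch. Below, cluster centroids stay in memory while posting lists (a dict standing in for on-disk files) are loaded only for the few clusters nearest the query. The randomly chosen "centroids" are a crude stand-in for SPANN's balanced clustering, so this is a shape of the idea rather than the system itself.

```python
import numpy as np

# Toy sketch of a memory-disk hybrid ANN index: small centroid set in
# memory, posting lists "on disk", only n_probe lists read per query.
rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 16)).astype(np.float32)

n_clusters = 64
centroids = data[rng.choice(len(data), n_clusters, replace=False)]  # crude stand-in
assign = np.argmin(((data[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
posting = {c: np.where(assign == c)[0] for c in range(n_clusters)}   # "on disk"

def search(q, n_probe=4, k=5):
    near = np.argsort(((centroids - q) ** 2).sum(-1))[:n_probe]  # memory only
    cand = np.concatenate([posting[c] for c in near])            # "disk" reads
    dists = ((data[cand] - q) ** 2).sum(-1)
    return cand[np.argsort(dists)[:k]]                           # top-k ids

print(search(rng.normal(size=16).astype(np.float32)))
```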

AI Machine Learning & Data Science Research

Is BERT the Future of Image Pretraining? ByteDance Team’s BERT-like Pretrained Vision Transformer iBOT Achieves New SOTAs

A research team from ByteDance, Johns Hopkins University, Shanghai Jiao Tong University and UC Santa Cruz seeks to apply the proven technique of masked language modelling to the training of better vision transformers, presenting iBOT (image BERT pre-training with online tokenizer), a self-supervised framework that performs masked prediction with an online tokenizer.
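
A minimal sketch of the masked-prediction loop follows; it is illustrative, not the paper's architecture. An exponential-moving-average teacher plays the role of the online tokenizer, producing soft patch-token targets that the student must predict at masked positions (the paper masks blockwise; we mask at random).

```python
import numpy as np

# iBOT-style masked prediction with an "online tokenizer": the EMA teacher
# sees the full view and provides soft token targets; the student sees a
# masked view and is trained to match the teacher at masked positions.
rng = np.random.default_rng(0)
n_patch, d, vocab = 16, 8, 32
Ws = rng.normal(size=(d, vocab)) * 0.1        # student projection
Wt = Ws.copy()                                # teacher starts as a copy

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

patches = rng.normal(size=(n_patch, d))
mask = rng.random(n_patch) < 0.4              # random masking (toy choice)

student_in = np.where(mask[:, None], 0.0, patches)   # zeros stand in for [MASK]
p_student = softmax(student_in @ Ws)
p_teacher = softmax(patches @ Wt)                    # teacher sees full view

# Cross-entropy on masked positions only: predict the teacher's "visual tokens".
loss = -(p_teacher[mask] * np.log(p_student[mask] + 1e-9)).sum(-1).mean()
Wt = 0.99 * Wt + 0.01 * Ws                    # EMA update keeps tokenizer "online"
print("masked-token loss:", round(float(loss), 3))
```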

AI Machine Learning & Data Science Research

Google Brain & Radboud U ‘Dive Into Chaos’ to Show Gradients Are Not All You Need in Dynamical Systems

In the new paper Gradients Are Not All You Need, a Google Brain and Radboud University research team discusses a “particularly sinister” chaos-based failure mode that appears in a variety of differentiable circumstances, ranging from recurrent neural networks and numerical physics simulation to training learned optimizers.
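
The failure mode fits in a dozen lines: differentiate through an unrolled chaotic system (here the logistic map, a simple stand-in for the paper's examples) and the Jacobian products behind backpropagation grow exponentially with the unroll length.

```python
# Differentiating through an unrolled chaotic map: the chain-rule product
# of per-step Jacobians explodes, so long-horizon gradients become
# high-variance and effectively useless for optimization.
def logistic_grad(x0, r, T):
    x, dx = x0, 1.0                 # dx accumulates d x_T / d x_0
    for _ in range(T):
        dx *= r * (1.0 - 2.0 * x)   # Jacobian of x -> r * x * (1 - x)
        x = r * x * (1.0 - x)
    return x, dx

for T in (10, 50, 100):
    _, g = logistic_grad(0.3, r=3.9, T=T)   # r = 3.9 puts the map in chaos
    print(f"T={T:3d}  |dx_T/dx_0| = {abs(g):.3e}")
```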

AI Machine Learning & Data Science Research

Can ViT Layers Express Convolutions? Peking U, UCLA & Microsoft Researchers Say ‘Yes’

In the new paper Can Vision Transformers Perform Convolution?, a research team from Peking University, UCLA and Microsoft Research constructively proves that a single ViT layer with image patches as input can perform any convolution operation, and shows that ViT performance in low-data regimes can be significantly improved using the team's proposed training pipeline.
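
The constructive argument is easy to verify numerically: an attention head that places all of its weight on one fixed relative offset simply gathers a shifted copy of the grid, so nine such heads plus an output projection realize a 3×3 convolution. The check below uses circular padding to keep the equality exact; it illustrates the construction rather than reproducing the paper's proof.

```python
import numpy as np

# Numerical check: nine "heads", each gathering one fixed-offset shifted
# copy, combined by a projection with the kernel weights, equal a direct
# 3x3 convolution (computed as cross-correlation with circular padding).
rng = np.random.default_rng(0)
H = W = 6
img = rng.normal(size=(H, W))
kernel = rng.normal(size=(3, 3))

# "Attention" construction: one head per relative offset ...
shifts = [np.roll(img, (dy, dx), axis=(0, 1))
          for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
# ... and the output projection mixes heads with the kernel weights.
out_attn = sum(w * s for w, s in zip(kernel[::-1, ::-1].ravel(), shifts))

# Direct 3x3 convolution for comparison.
out_conv = sum(kernel[1 + dy, 1 + dx] * np.roll(img, (-dy, -dx), axis=(0, 1))
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

print(np.allclose(out_attn, out_conv))  # True
```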

AI Machine Learning & Data Science Natural Language Tech Research

Introducing MetaICL: A Language Model Meta-Training Framework for Few-Shot In-Context Learning

A research team from the University of Washington, Facebook AI Research and the Allen Institute for AI introduces Meta-training for In-Context Learning (MetaICL), a new meta-training framework for few-shot learning in which an LM is meta-trained to learn in-context, conditioning on training examples to recover the task and make predictions.
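
On the data side, the recipe can be sketched as follows; the formats and separators are illustrative assumptions, not the paper's exact ones. From each sampled meta-training task, k examples become the in-context prompt and the LM is trained to predict the label of a held-out (k+1)-th example; at meta-test time, an unseen task is presented the same way and the model adapts purely in context.

```python
import random

# Building one MetaICL-style meta-training instance (format is illustrative):
# k demonstrations from a single task, then a query whose label is the target.
rng = random.Random(0)

def build_instance(task_examples, k=4):
    sampled = rng.sample(task_examples, k + 1)
    demos, query = sampled[:k], sampled[k]
    prompt = " ".join(f"{x} -> {y}" for x, y in demos)
    return f"{prompt} {query[0]} ->", query[1]   # (LM input, training target)

task = [("great film", "pos"), ("dull plot", "neg"), ("loved it", "pos"),
        ("waste of time", "neg"), ("superb acting", "pos"), ("boring", "neg")]
src, tgt = build_instance(task)
print(src)
print("target:", tgt)
```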

AI Machine Learning & Data Science Research

Google & UC Berkeley’s Data-Driven Offline Optimization Approach Significantly Boosts Hardware Accelerator Performance, Reduces Simulation Time by More Than 90%

A research team from Google Research and UC Berkeley proposes PRIME, an offline data-driven approach that can architect hardware accelerators without any form of simulation. Compared to state-of-the-art simulation-driven methods, PRIME achieves impressive performance improvements of up to 1.54× while reducing the total required simulation time by up to 99 percent.
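
Stripped to its skeleton, offline data-driven optimization looks like the sketch below: fit a surrogate on logged (configuration, objective) pairs, then optimize candidate configurations against the surrogate with no new simulations. PRIME's actual contribution, a conservative surrogate objective that is robust to off-distribution queries, is only gestured at here by the distance penalty; the features, knobs and objective are all toy assumptions.

```python
import numpy as np

# Offline model-based optimization skeleton: learn a surrogate from logged
# data, then search configurations against it instead of a simulator.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 6))           # logged configs (6 toy knobs)
y = -((X - 0.3) ** 2).sum(-1) + 0.05 * rng.normal(size=500)  # logged objective

# Surrogate: ridge regression on quadratic features (illustrative choice).
feats = lambda Z: np.concatenate([Z, Z ** 2, np.ones((len(Z), 1))], axis=1)
F = feats(X)
w = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ y)

# Score candidates; penalize distance from the data so we do not trust the
# surrogate where it has seen nothing (a stand-in for PRIME's conservatism).
cand = rng.uniform(-1, 1, size=(2000, 6))
d2 = ((cand[:, None, :] - X[None, :200, :]) ** 2).sum(-1).min(axis=1)
score = feats(cand) @ w - 0.5 * d2
best = cand[int(np.argmax(score))]
print("best config found offline:", np.round(best, 2))  # tends toward 0.3
```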

AI Machine Learning & Data Science Research

Washington U & Google Study Reveals How Attention Matrices Are Formed in Encoder-Decoder Architectures

In the new paper Understanding How Encoder-Decoder Architectures Attend, researchers from the University of Washington, Google Blueshift Team and Google Brain Team propose a method for decomposing hidden states over a sequence into temporal- and input-driven components, revealing how attention matrices are formed in encoder-decoder networks.
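
The decomposition itself is straightforward, sketched here with a toy RNN standing in for the encoder: average the hidden state at each position across many inputs to obtain the input-independent temporal component, and treat the per-example residual as the input-driven component. The network, shapes and data are assumptions for illustration.

```python
import numpy as np

# Decomposing hidden states into temporal and input-driven components:
# the cross-input mean at each step is the temporal part; the residual
# is what the specific input contributes.
rng = np.random.default_rng(0)
n_seq, T, d = 200, 12, 8
Wh = rng.normal(size=(d, d)) * 0.3          # toy RNN as a stand-in encoder
Wx = rng.normal(size=(1, d)) * 0.3

def encode(x):                              # x: (T,) -> hidden states (T, d)
    h, hs = np.zeros(d), []
    for t in range(T):
        h = np.tanh(h @ Wh + x[t] * Wx[0])
        hs.append(h)
    return np.stack(hs)

H = np.stack([encode(rng.normal(size=T)) for _ in range(n_seq)])  # (n_seq, T, d)
temporal = H.mean(axis=0)                   # (T, d): position-driven component
input_driven = H - temporal                 # (n_seq, T, d): input's contribution

# The paper studies how attention forms from these two components.
print("temporal-component norm per step:",
      np.round(np.linalg.norm(temporal, axis=1), 2))
```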

AI Computer Vision & Graphics Machine Learning & Data Science Research

Softmax-free Vision Transformer With Linear Complexity: Achieving a Superior Accuracy/Complexity Trade-off

Researchers from Fudan University, University of Surrey and Huawei Noah's Ark Lab trace the quadratic complexity of vision transformers (ViTs) to the retention of softmax self-attention in existing approximation schemes. The team proposes the first softmax-free transformer (SOFT), which reduces the self-attention computation to linear complexity, achieving a superior trade-off between accuracy and complexity.
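
Why removing softmax matters for complexity: without the row-wise softmax, (QKᵀ)V can be re-associated as Q(KᵀV), turning an O(n²) computation into an O(nd²) one. SOFT itself uses a Gaussian kernel with a low-rank, Nyström-style decomposition; the elu+1 feature map below is a common linear-attention stand-in, not the paper's kernel.

```python
import numpy as np

# Softmax-free (linearized) attention: apply a positive feature map to Q
# and K, then reassociate the matrix product so the n x n attention
# matrix is never formed.
rng = np.random.default_rng(0)
n, d = 1024, 32
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

phi = lambda Z: np.where(Z > 0, Z + 1.0, np.exp(Z))   # elu(z) + 1, always > 0

Qf, Kf = phi(Q), phi(K)
KV = Kf.T @ V                                  # (d, d): cost O(n d^2)
norm = Qf @ Kf.sum(axis=0)                     # per-row normalizer, O(n d)
out = (Qf @ KV) / norm[:, None]                # (n, d), linear in sequence length

print(out.shape)
```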

AI Computer Vision & Graphics Machine Learning & Data Science Research

Google Open-Sources SCENIC: A JAX Library for Rapid Computer Vision Model Prototyping and Cutting-Edge Research

A research team from Google Brain and Google Research introduces SCENIC, an open-source JAX library for fast and extensible computer vision research and beyond. SCENIC currently provides implementations of state-of-the-art vision models such as ViT, DETR and MLP-Mixer, with more open-sourced cutting-edge projects to be added in the near future.