Category: Machine Learning & Data Science

AI Machine Learning & Data Science Research

EPFL’s Multi-modal Multi-task Masked Autoencoder: A Simple, Flexible and Effective ViT Pretraining Strategy Applicable to Any RGB Dataset

The Swiss Federal Institute of Technology Lausanne (EPFL) presents Multi-modal Multi-task Masked Autoencoders (MultiMAE), a simple and effective pretraining strategy that extends masked autoencoding to multiple modalities and tasks and is applicable to any RGB dataset.
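
A minimal sketch of the masking step, assuming illustrative function and parameter names (the paper samples each modality's share of the visible-token budget from a symmetric Dirichlet; everything else here is simplified):

```python
import numpy as np

def sample_visible_patches(patches_per_modality, total_visible, alpha=1.0, rng=None):
    """MultiMAE-style multi-modal masking sketch: draw each modality's share
    of the visible-token budget from a symmetric Dirichlet, then pick that
    many visible patches uniformly within the modality. All unsampled
    patches would be masked and reconstructed by the decoders."""
    rng = rng or np.random.default_rng()
    shares = rng.dirichlet([alpha] * len(patches_per_modality))
    visible = {}
    for (modality, n_total), share in zip(patches_per_modality.items(), shares):
        n_vis = min(n_total, round(share * total_visible))
        visible[modality] = rng.choice(n_total, size=n_vis, replace=False)
    return visible

# Example: 196 patches each for RGB, depth, and semantic maps; keep ~98 tokens.
vis = sample_visible_patches({"rgb": 196, "depth": 196, "semseg": 196}, total_visible=98)
print({m: len(idx) for m, idx in vis.items()})
```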

AI Machine Learning & Data Science Natural Language Tech Research

Training Compute-Optimal Large Language Models: DeepMind’s 70B Parameter Chinchilla Outperforms 530B Parameter Megatron-Turing

In the new paper Training Compute-Optimal Large Language Models, a DeepMind research team posits that current large language models are significantly undertrained and, based on empirical outcomes of over 400 training runs, proposes three predictive approaches for optimally setting model size and training duration.
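
To make the headline finding concrete, here is a back-of-the-envelope sketch. It assumes the paper's approximate result that optimal model size and token count both grow as roughly the square root of compute, anchored at Chinchilla's reported operating point, with the standard training-cost estimate C ≈ 6ND FLOPs; the exponents are rounded, not the paper's exact fits:

```python
# Compute-optimal scaling sketch, anchored at Chinchilla (70B params,
# 1.4T tokens), assuming N_opt and D_opt both scale as C^0.5.
REF_N, REF_D = 70e9, 1.4e12
REF_C = 6 * REF_N * REF_D            # ~5.9e23 FLOPs via C = 6*N*D

def compute_optimal(C):
    scale = (C / REF_C) ** 0.5       # equal scaling of model size and data
    return REF_N * scale, REF_D * scale

N, D = compute_optimal(1e24)         # a budget ~2x Chinchilla's
print(f"~{N / 1e9:.0f}B parameters, ~{D / 1e12:.1f}T tokens")
```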

AI Machine Learning & Data Science Research

CMU & Google Extend Pretrained Models to Thousands of Underrepresented Languages Without Using Monolingual Data

A research team from Carnegie Mellon University and Google systematically explores strategies for leveraging the relatively under-studied resource of bilingual lexicons to adapt pretrained multilingual models to low-resource languages. Their resulting Lexicon-based Adaptation approach produces consistent performance improvements without requiring additional monolingual text.
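
The core data-augmentation move can be sketched in a few lines. The lexicon below is invented for illustration, and the actual paper covers several adaptation strategies beyond word-for-word pseudo-translation:

```python
# Sketch of lexicon-based augmentation: word-for-word "translate"
# high-resource text into a target language using a bilingual lexicon,
# keeping out-of-lexicon words unchanged. The synthetic text can then be
# used to adapt a pretrained multilingual model without monolingual data.
def pseudo_translate(sentence, lexicon):
    return " ".join(lexicon.get(tok.lower(), tok) for tok in sentence.split())

# Toy English -> (hypothetical) target-language lexicon.
lexicon = {"water": "amanzi", "good": "kuhle", "is": "ngu"}
print(pseudo_translate("Water is good", lexicon))  # -> "amanzi ngu kuhle"
```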

AI Machine Learning & Data Science Natural Language Tech Research

Google, NYU & Maryland U’s Token-Dropping Approach Reduces BERT Pretraining Time by 25%

In the new paper Token Dropping for Efficient BERT Pretraining, a research team from Google, New York University, and the University of Maryland proposes a simple but effective “token dropping” technique that significantly reduces the pretraining cost of transformer models such as BERT without hurting performance on downstream fine-tuning tasks.
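
A minimal sketch of the mechanism, with illustrative names and a stand-in importance score (the paper identifies unimportant tokens via their running masked-language-model loss): only important tokens pass through the middle layers, while dropped tokens bypass them and are merged back before the final layers:

```python
import torch

def middle_layers_with_token_dropping(h, importance, middle_layers, keep_ratio=0.5):
    """Token-dropping sketch. h: (batch, seq, dim) hidden states;
    importance: (batch, seq) scores. Only the top keep_ratio tokens are
    processed by the middle layers; the rest skip them and are merged back,
    so the output keeps full sequence length."""
    B, S, D = h.shape
    k = max(1, int(S * keep_ratio))
    keep_idx = importance.topk(k, dim=1).indices                  # (B, k)
    kept = h.gather(1, keep_idx.unsqueeze(-1).expand(B, k, D))
    for layer in middle_layers:                                   # full cost on kept tokens only
        kept = layer(kept)
    out = h.clone()                                               # dropped tokens pass through
    out.scatter_(1, keep_idx.unsqueeze(-1).expand(B, k, D), kept)
    return out

# Toy usage with linear stand-ins for transformer layers:
layers = [torch.nn.Linear(16, 16) for _ in range(2)]
h, scores = torch.randn(2, 10, 16), torch.rand(2, 10)
print(middle_layers_with_token_dropping(h, scores, layers).shape)  # (2, 10, 16)
```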

AI Machine Learning & Data Science Popular Research

DeepMind Proposes Symmetry-Based Representations as a Fundamental Principle for Learning Good Representations in General Intelligence

A DeepMind research team argues that symmetries, as formalized in group theory, are a fundamental principle that shapes the structure of the universe, constrains the nature of natural tasks, and consequently shapes both biological and artificial intelligence. The study proposes symmetry transformations as a fundamental principle for defining what makes good representations.
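
The formal notion at the centre of the argument is equivariance; a minimal statement in standard group-theoretic notation (ours, not quoted from the paper): a representation f respects a symmetry group G acting on the data when

```latex
% Equivariance: transforming the input x by any g in G corresponds to a
% predictable transformation of the representation.
f(g \cdot x) = g \cdot f(x) \qquad \text{for all } g \in G
% Invariance is the special case where the action on the output is trivial:
f(g \cdot x) = f(x)
```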

AI Machine Learning & Data Science Research

Google Extends Transformers for Immediate Knowledge Acquisition via a Simple New Data Read & Memorize Technique

A Google research team addresses conventional transformers’ resource-heavy training and fine-tuning requirements for learning new knowledge, proposing Memorizing Transformers as a step toward language models that can simply read and memorize new data at inference time for immediate knowledge acquisition.
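
A minimal sketch of the retrieval step, assuming illustrative shapes and names (the paper integrates this as approximate k-nearest-neighbour attention inside a full transformer layer):

```python
import numpy as np

def knn_memory_attention(q, mem_keys, mem_vals, k=4):
    """Memory-lookup sketch: retrieve the k most similar cached (key, value)
    pairs for a query and attend over just those, so new text stored in the
    memory becomes usable at inference time without weight updates.
    q: (dim,); mem_keys, mem_vals: (num_memories, dim)."""
    scores = mem_keys @ q                        # dot-product similarity
    top = np.argsort(scores)[-k:]                # indices of the k nearest keys
    w = np.exp(scores[top] - scores[top].max())  # softmax over the retrieved set
    w /= w.sum()
    return w @ mem_vals[top]                     # weighted sum of retrieved values

rng = np.random.default_rng(0)
keys, vals = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64))
print(knn_memory_attention(rng.normal(size=64), keys, vals).shape)  # (64,)
```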

AI Machine Learning & Data Science Natural Language Tech Research

Google & IDSIA’s Block-Recurrent Transformer Dramatically Outperforms Transformers Over Very Long Sequences

A team from Google Research and the Swiss AI Lab IDSIA proposes the Block-Recurrent Transformer, a novel long-sequence processing approach that matches a conventional transformer layer in computation time and parameter count while achieving significant perplexity improvements on language modelling tasks over very long sequences.
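
A rough sketch of the recurrence pattern, with illustrative module names and the paper's gating details omitted: the sequence is consumed one fixed-size block at a time, and each block reads from and writes to a small recurrent state, so per-token cost stays constant regardless of sequence length:

```python
import torch, torch.nn as nn

class BlockRecurrentSketch(nn.Module):
    """Block-recurrent sketch (not the paper's exact cell): per block,
    apply local self-attention, cross-attend to the recurrent state, then
    update the state from the block."""
    def __init__(self, dim=64, heads=4, state_len=8):
        super().__init__()
        self.state0 = nn.Parameter(torch.zeros(state_len, dim))
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.state_update = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, block=32):                 # x: (batch, seq, dim)
        state = self.state0.expand(x.shape[0], -1, -1)
        outs = []
        for i in range(0, x.shape[1], block):
            xb = x[:, i:i + block]
            xb = xb + self.self_attn(xb, xb, xb)[0]               # local attention
            xb = xb + self.cross_attn(xb, state, state)[0]        # read state
            state = state + self.state_update(state, xb, xb)[0]   # write state
            outs.append(xb)
        return torch.cat(outs, dim=1)

print(BlockRecurrentSketch()(torch.randn(2, 128, 64)).shape)  # (2, 128, 64)
```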

AI Machine Learning & Data Science Research

Ithaca Paper Published in Nature: The First DNN Designed for Textual Restoration and Geographical and Chronological Attribution of Ancient Greek Inscriptions

A research team from DeepMind, Ca’ Foscari University of Venice, University of Oxford and Athens University of Economics and Business introduces Ithaca, a deep neural network (DNN) designed for textual restoration and geographical and chronological attribution of ancient Greek inscriptions.

AI Machine Learning & Data Science Research

Microsoft & OpenAI’s µTransfer Zero-Shot Hyperparameter Transfer Method Tunes GPT-3’s Hyperparameters on a Single GPU

In the new paper Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer, Microsoft and OpenAI researchers propose µTransfer, a method that leverages Maximal Update Parametrization (µP) to zero-shot transfer hyperparameters from small models and obtain near-optimal parameters on large models without directly tuning them.
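
A minimal sketch of the intuition, with hypothetical numbers and function names rather than the authors' mup library API: width-sensitive hyperparameters, such as Adam learning rates for hidden weight matrices, are rescaled by base_width / width, so values tuned on a small proxy remain near-optimal at scale:

```python
# µP-style hyperparameter transfer sketch: under Maximal Update
# Parametrization, the Adam learning rate for hidden weight matrices
# scales roughly as 1/width relative to the tuned base model.
def mup_hidden_lr(tuned_lr, base_width, width):
    return tuned_lr * base_width / width

tuned_lr = 3e-3          # hypothetical: found by sweeping a width-256 proxy
for width in (256, 1024, 8192):
    print(width, mup_hidden_lr(tuned_lr, base_width=256, width=width))
```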

AI Machine Learning & Data Science Research

OpenAI’s AutoDIME: Automating Multi-Agent Environment Design for RL Agents

In the new paper AutoDIME: Automatic Design of Interesting Multi-Agent Environments, an OpenAI research team explores automatic environment design for multi-agent environments using an RL-trained teacher that samples environments to maximize student learning. The work demonstrates that intrinsic teacher rewards are a promising approach for automating both single-agent and multi-agent environment design.
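
A toy sketch of the idea, with an invented 1-D environment parameter and value functions: the teacher scores candidate environments by an intrinsic signal derived from the student, here the disagreement (variance) of an ensemble of student value estimates, one of the intrinsic rewards the paper examines:

```python
import numpy as np

def teacher_score(env_params, value_ensemble):
    """Value disagreement: high where student value estimates conflict,
    i.e. where the environment is likely to drive learning."""
    values = np.array([v(env_params) for v in value_ensemble])
    return values.var()

# Hypothetical ensemble of student value functions over a 1-D env parameter.
ensemble = [lambda p, w=w: np.tanh(w * p) for w in (0.5, 1.0, 2.0)]
candidates = np.linspace(-2, 2, 5)
best = max(candidates, key=lambda p: teacher_score(p, ensemble))
print(best)  # the teacher would preferentially sample this environment
```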

AI Machine Learning & Data Science Research

DeepMind Trains AI Agents Capable of Robust Real-Time Cultural Transmission Without Human Data

In the new paper Learning Robust Real-Time Cultural Transmission Without Human Data, a DeepMind research team proposes a procedure for training artificially intelligent agents capable of flexible, high-recall, robust real-time cultural transmission from human co-players in a rich 3D physical simulation without using human data in the training pipeline.

AI Machine Learning & Data Science Research

Meet TQP: The First Query Processor to Run on Tensor Computation Runtimes, Delivering up to 20x Speedups Over CPU-Only Systems

A research team from the University of Washington, UC San Diego and Microsoft prototypes Tensor Query Processor (TQP), a query processor that runs atop tensor computation runtimes (TCRs) such as PyTorch, TVM, and ONNX Runtime, improving query execution time by up to 20x over CPU-only systems and up to 5x over specialized GPU solutions.
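
To see why this works, note that relational operators map naturally onto tensor operations. A minimal sketch with illustrative data (not TQP's compiler): the query SELECT SUM(price) FROM orders WHERE qty > 10 becomes a boolean mask and a reduction that run unchanged on CPU or GPU:

```python
import torch

# Toy "orders" table as column tensors.
qty = torch.tensor([5, 12, 30, 8, 50])
price = torch.tensor([10., 20., 30., 40., 50.])

mask = qty > 10                 # WHERE qty > 10  -> boolean selection tensor
result = price[mask].sum()      # SUM(price) over the selected rows
print(result.item())            # 100.0
```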

AI Machine Learning & Data Science Research

Princeton U’s DataMUX Enables DNNs to Simultaneously and Accurately Process up to 40 Input Instances With Limited Computational Overhead

In the new paper DataMUX: Data Multiplexing for Neural Networks, a Princeton University research team proposes Data Multiplexing (DataMUX). The novel technique enables neural networks to process multiple inputs simultaneously and generate accurate predictions, increasing model throughput with minimal additional memory requirements.
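
A minimal sketch of the multiplexing step with illustrative dimensions (the demultiplexing head and the training recipe are omitted):

```python
import torch

# Data multiplexing sketch: N input representations are mixed into one
# sequence by applying a fixed random linear map to each instance and
# averaging; a learned demultiplexer would then recover N predictions
# from the shared hidden state.
N, seq, dim = 4, 16, 64
inputs = torch.randn(N, seq, dim)                    # N separate instances
mux_mats = torch.randn(N, dim, dim) / dim ** 0.5     # fixed per-instance transforms

mixed = torch.einsum("nsd,nde->nse", inputs, mux_mats).mean(dim=0)  # (seq, dim)
# `mixed` now stands in for N inputs: the backbone processes it once,
# and a demultiplexing head (not shown) emits one prediction per instance.
print(mixed.shape)  # torch.Size([16, 64])
```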

AI Computer Vision & Graphics Machine Learning & Data Science Research

DeepMind’s Upgraded Hierarchical Perceiver Is Faster, Scales to Larger Data Without Preprocessing, and Delivers Higher Resolution and Accuracy

DeepMind researchers propose Hierarchical Perceiver (HiP), a model that retains the original Perceiver’s ability to process arbitrary modalities but is faster, can scale up to even more inputs/outputs, reduces the need for input engineering, and improves both efficiency and accuracy on classical computer vision benchmarks.
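
A rough sketch of one hierarchical level, with illustrative sizes and module names that differ from the paper's exact design: the flat input is split into local groups, each group is compressed by a small set of learned latents via cross-attention, and the merged latents form a much shorter sequence for the next level:

```python
import torch, torch.nn as nn

class HiPLevelSketch(nn.Module):
    """Hierarchical compression sketch: per-group cross-attention from a few
    learned latents shrinks a long flat input into a short latent sequence."""
    def __init__(self, dim=64, heads=4, groups=8, latents_per_group=4):
        super().__init__()
        self.groups = groups
        self.latents = nn.Parameter(torch.randn(latents_per_group, dim))
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                   # x: (batch, seq, dim)
        B, S, D = x.shape
        xg = x.view(B * self.groups, S // self.groups, D)   # split into local groups
        q = self.latents.expand(B * self.groups, -1, -1)
        z = self.cross(q, xg, xg)[0]                        # compress each group
        return z.reshape(B, -1, D)                          # much shorter sequence

print(HiPLevelSketch()(torch.randn(2, 1024, 64)).shape)  # (2, 32, 64)
```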

AI Computer Vision & Graphics Machine Learning & Data Science Research

Tsinghua & NKU’s Visual Attention Network Combines the Advantages of Convolution and Self-Attention, Achieves SOTA Performance on CV Tasks

In the new paper Visual Attention Network, a research team from Tsinghua University and Nankai University introduces a novel large kernel attention (LKA) mechanism for an extremely simple and efficient Visual Attention Network (VAN) that significantly outperforms state-of-the-art vision transformers and convolutional neural networks on various computer vision tasks.
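
The LKA decomposition itself is compact enough to sketch directly; the kernel sizes below follow the paper's decomposition of a 21x21 kernel into a 5x5 depth-wise conv, a 7x7 depth-wise conv with dilation 3, and a 1x1 conv, though this is a standalone sketch rather than the authors' released code:

```python
import torch, torch.nn as nn

class LKA(nn.Module):
    """Large kernel attention sketch: a large kernel conv is decomposed into
    depth-wise, depth-wise dilated, and pointwise convs, and the resulting
    map gates the input element-wise (the "attention")."""
    def __init__(self, dim):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x   # gate the input with the learned attention map

x = torch.randn(1, 32, 56, 56)
print(LKA(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```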

AI Machine Learning & Data Science Research

Transformers Meet Online RL: New Study Unifies Offline Pretraining and Online Finetuning, Achieves SOTA Results

A team from Facebook AI Research, UC Berkeley and UCLA proposes Online Decision Transformers (ODT), an RL algorithm based on sequence modelling that incorporates offline pretraining and online finetuning in a unified framework and achieves performance competitive with state-of-the-art models on the D4RL benchmark.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Google’s MaskGIT Outperforms SOTA Transformer Models on Conditional Image Generation and Accelerates Autoregressive Decoding by up to 64x

A Google Research team proposes Masked Generative Image Transformer (MaskGIT), a novel image synthesis paradigm that uses a bidirectional transformer decoder. MaskGIT significantly outperforms state-of-the-art transformer models on the ImageNet dataset and accelerates autoregressive decoding by up to 64x.
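
A minimal sketch of the decoding loop, assuming a hypothetical predict_fn that stands in for the trained bidirectional transformer: predict everything in parallel each step, keep the most confident tokens, and re-mask the rest on a cosine schedule:

```python
import numpy as np

def maskgit_decode(predict_fn, num_tokens, steps=8):
    """MaskGIT-style iterative parallel decoding sketch. predict_fn(tokens,
    mask) is assumed to return per-position (token_id, confidence)."""
    tokens = np.zeros(num_tokens, dtype=int)
    masked = np.ones(num_tokens, dtype=bool)            # everything starts masked
    for t in range(1, steps + 1):
        ids, conf = predict_fn(tokens, masked)
        tokens[masked] = ids[masked]                    # fill in current guesses
        n_still_masked = int(num_tokens * np.cos(np.pi / 2 * t / steps))
        if n_still_masked == 0:
            break                                       # all tokens committed
        conf = np.where(masked, conf, np.inf)           # committed tokens stay fixed
        remask = np.argsort(conf)[:n_still_masked]      # least confident re-masked
        masked[:] = False
        masked[remask] = True
    return tokens

# Toy "model": random tokens and confidences, just to exercise the loop.
toy = lambda tok, m: (np.random.randint(0, 1024, size=tok.shape), np.random.rand(*tok.shape))
print(maskgit_decode(toy, num_tokens=16))
```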