Tag: Transformers

AI Machine Learning & Data Science Research

Meta’s Dualformer: Bridging Fast and Slow Thinking in Transformers for Superior AI Reasoning

In a new paper Dualformer: Controllable Fast and Slow Thinking by Learning with Randomized Reasoning Traces, a Meta research team presents Dualformer, a single Transformer model that merges both fast and slow reasoning modes within a unified framework.
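
The recipe works at the level of training data: reasoning traces are randomly shortened so the model learns to answer with a full trace, a partial trace, or no trace at all. Below is a minimal Python sketch of that trace-randomization idea; the function name, probabilities, and string format are illustrative assumptions, and the paper structures its dropping strategies around A* search traces rather than arbitrary steps.

```python
import random

def randomize_trace(prompt: str, trace_steps: list[str], solution: str,
                    p_drop_all: float = 0.3, p_drop_step: float = 0.2) -> str:
    """Build one training example with a randomly shortened reasoning trace.

    With probability p_drop_all the whole trace is omitted (fast mode);
    otherwise individual steps are dropped independently (partial traces).
    """
    if random.random() < p_drop_all:
        kept = []  # fast mode: the model sees prompt and solution only
    else:
        kept = [s for s in trace_steps if random.random() > p_drop_step]
    return "\n".join([prompt, *kept, solution])

example = randomize_trace(
    prompt="maze: find a path from S to G",
    trace_steps=["expand S", "expand A", "prune B", "expand G"],
    solution="path: S -> A -> G",
)
```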

AI Machine Learning & Data Science Research

Huawei & Peking U’s DiJiang: A Transformer Achieving LLaMA2-7B Performance at 1/50th the Training Cost

A research team from Huawei and Peking University introduces DiJiang, a groundbreaking Frequency Domain Kernelization approach that converts a vanilla Transformer into a linear-complexity model with minimal training overhead, achieving performance comparable to LLaMA2-7B across various benchmarks at just 1/50th of the training cost.

AI Machine Learning & Data Science Research

DeepMind & Toulouse U Contribute Composable Function-Preserving Transformations to Boost Transformer Training

In a new paper Composable Function-preserving Expansions for Transformer Architectures, a research team from Google DeepMind and the University of Toulouse introduces parameter expansion transformations for transformer-based neural networks that preserve functionality, enabling a model’s capacity to be expanded as needed.
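
As a concrete illustration, one classic way to expand a network without changing its function is to zero-initialize the parameters that feed the new units back into the residual stream. The PyTorch sketch below widens a transformer feed-forward block this way; it is a generic example of a function-preserving expansion under assumed layer shapes, not the paper’s exact transformations (which also cover depth, attention heads, and other dimensions).

```python
import torch
import torch.nn as nn

def widen_ffn(lin1: nn.Linear, lin2: nn.Linear, extra: int):
    """Widen a two-layer FFN block without changing its input-output map.

    New rows of lin1 may be initialized freely, but the matching new
    columns of lin2 are zeroed, so the added units contribute nothing
    until training moves them away from zero.
    """
    d_model, d_ff = lin1.in_features, lin1.out_features
    new1 = nn.Linear(d_model, d_ff + extra)
    new2 = nn.Linear(d_ff + extra, d_model)
    with torch.no_grad():
        new1.weight[:d_ff] = lin1.weight
        new1.bias[:d_ff] = lin1.bias
        new1.weight[d_ff:].normal_(std=0.02)
        new1.bias[d_ff:].zero_()
        new2.weight[:, :d_ff] = lin2.weight
        new2.weight[:, d_ff:].zero_()  # the key step: function is preserved
        new2.bias.copy_(lin2.bias)
    return new1, new2

lin1, lin2 = nn.Linear(64, 256), nn.Linear(256, 64)
w1, w2 = widen_ffn(lin1, lin2, extra=128)
x = torch.randn(3, 64)
assert torch.allclose(lin2(torch.relu(lin1(x))), w2(torch.relu(w1(x))), atol=1e-5)
```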

AI Machine Learning & Data Science Research

DeepMind Proposes New Paradigm for Interfacing Language Models with Robots Through Rewards

In a new paper Language to Rewards for Robotic Skill Synthesis, a Google DeepMind research team proposes a new paradigm that leverages reward functions to interface language and low-level robot actions, enabling non-technical users to steer novel and intricate robot behaviours without large amounts of data or the expert knowledge needed to engineer low-level primitives.

AI Machine Learning & Data Science Research

From Pixels to UI Actions: Google’s PIX2ACT Agent Learns to Follow Instructions via GUIs

In a new paper From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces, a research team from Google and DeepMind proposes PIX2ACT, a Transformer-based image-to-text model that generates outputs corresponding to mouse and keyboard actions based solely on pixel-based screenshots of graphical user interfaces (GUIs).

AI Machine Learning & Data Science Research

Optimizing Transformers: Microsoft & RUC’s ResiDual Solves Gradient Vanishing and Representation Collapse Issues

In the new paper ResiDual: Transformer With Dual Residual Connections, a team from Microsoft Research, Microsoft Azure Translation, and Renmin University of China proposes ResiDual, a novel transformer architecture that fuses the residual connections of post-layer normalization and pre-layer normalization to exploit the benefits of both while addressing their limitations.
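
A minimal sketch of the dual-stream idea as we read it: one stream is updated Post-LN style (normalize after every residual addition), a second accumulates the raw sublayer outputs Pre-LN style, and the two are fused at the end. The sublayer contents and normalization details below are simplified stand-ins, not the paper’s exact architecture.

```python
import torch
import torch.nn as nn

class ResiDualEncoder(nn.Module):
    """Dual residual streams: Post-LN updates plus a raw Pre-LN accumulator."""

    def __init__(self, d_model: int, n_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_model), nn.GELU(),
                          nn.Linear(d_model, d_model))
            for _ in range(n_layers))
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_layers))
        self.final_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dual = torch.zeros_like(x)           # Pre-LN-style accumulator
        for f, norm in zip(self.layers, self.norms):
            out = f(x)
            x = norm(x + out)                # Post-LN stream
            dual = dual + out                # raw residual stream
        return x + self.final_norm(dual)     # fuse the two streams at the end
```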

AI Machine Learning & Data Science Natural Language Tech Research

Google & TAU Explore How Transformer-Based LLMs Extract Knowledge From Their Parameters

In the new paper Dissecting Recall of Factual Associations in Auto-Regressive Language Models, a team from Google DeepMind, Tel Aviv University and Google Research investigates how factual associations are stored and extracted internally in transformer-based language models and provides insights on how such models’ factual predictions are formed.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Look Again, YOLO: Baidu’s RT-DETR Detection Transformer Achieves SOTA Results on Real-Time Object Detection

In the new paper DETRs Beat YOLOs on Real-Time Object Detection, a Baidu Inc. research team presents Real-Time Detection Transformer (RT-DETR), a real-time end-to-end object detector that leverages a hybrid encoder and novel IoU-aware query selection to reduce inference latency. RT-DETR outperforms YOLO object detectors in both accuracy and speed.

AI Machine Learning & Data Science Natural Language Tech Research

Microsoft’s MathPrompter Dramatically Improves LLM Performance on Mathematical Reasoning Tasks

In the new paper MathPrompter: Mathematical Reasoning Using Large Language Models, a Microsoft Research team presents MathPrompter, a novel approach that leverages chain-of-thought (CoT) prompting techniques to improve LLM performance on mathematical reasoning problems and increase confidence in their predictions.
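
The core recipe is to sample several independent solution paths, execute them, and trust an answer only when the paths agree. The Python sketch below captures that consensus step; the `llm` callable is a hypothetical stand-in, and the actual method additionally cross-checks algebraic expressions against Python code on random variable assignments before committing to an answer.

```python
from collections import Counter

def mathprompter(question: str, llm, n_paths: int = 5):
    """Consensus over several model-generated solution expressions.

    `llm` is a hypothetical callable that returns a Python arithmetic
    expression as a string for the given question.
    """
    answers = []
    for _ in range(n_paths):
        expr = llm(f"Write a Python expression that answers: {question}")
        try:
            answers.append(eval(expr, {"__builtins__": {}}))
        except Exception:
            continue  # discard solution paths that fail to execute
    if not answers:
        return None, 0.0
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n_paths  # answer plus agreement as confidence
```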

AI Machine Learning & Data Science Natural Language Tech Research

DeepMind’s Speculative Sampling Achieves 2–2.5x Decoding Speedups in Large Language Models

In the new paper Accelerating Large Language Model Decoding with Speculative Sampling, a DeepMind research team presents SpS (Speculative Sampling), an algorithm that achieves 2–2.5x decoding speedups on a 70 billion parameter Chinchilla language model. The novel approach maintains sample quality and does not require any modifications to model parameters or architecture.
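
The algorithm itself is simple enough to sketch: a cheap draft model proposes a block of k tokens, the large target model scores the whole block in a single forward pass, and a modified rejection-sampling step accepts a prefix of the proposals while provably leaving the target distribution unchanged. Below is a minimal PyTorch sketch under assumed model interfaces (`target(x)` and `draft(x)` are hypothetical callables returning per-position next-token probabilities).

```python
import torch

@torch.no_grad()
def speculative_sample(target, draft, prefix, k: int = 4, max_len: int = 64):
    """Sketch of speculative sampling with a draft and a target model.

    target(x) and draft(x) are assumed to return next-token probability
    distributions of shape [len(x), vocab] for every position of x.
    """
    x = prefix.clone()
    while x.numel() < max_len:
        # 1) The draft model cheaply proposes k tokens, one at a time.
        proposal, q_probs = x.clone(), []
        for _ in range(k):
            q = draft(proposal)[-1]
            q_probs.append(q)
            proposal = torch.cat([proposal, torch.multinomial(q, 1)])
        # 2) The target model scores the whole proposed block in one pass.
        p_all = target(proposal)
        n, accepted = x.numel(), 0
        for i in range(k):
            tok = proposal[n + i]
            p, q = p_all[n + i - 1], q_probs[i]
            if torch.rand(()) < (p[tok] / q[tok]).clamp(max=1.0):
                accepted += 1  # keep the draft token
            else:
                # 3) On rejection, resample from the residual max(p - q, 0),
                #    which keeps the overall samples target-distributed.
                resid = (p - q).clamp(min=0)
                tok = torch.multinomial(resid / resid.sum(), 1)
                x = torch.cat([x, proposal[n:n + accepted], tok])
                break
        else:
            # All k drafts accepted (the paper also samples one extra
            # token from the target distribution at this point).
            x = torch.cat([x, proposal[n:]])
    return x
```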

AI Machine Learning & Data Science Research

Forget About Catastrophic Forgetting: Google’s Continual HyperTransformer Enables Efficient Continual Few-Shot Learning

In the new paper Continual Few-Shot Learning Using HyperTransformers, a Google Research team proposes Continual HyperTransformer, which modifies the recently published HyperTransformer few-shot learning method to sequentially update a convolutional neural network’s weights based on the information in a new task, without forgetting the knowledge learned from previous tasks.

AI Machine Learning & Data Science Research

Meet Tracr: DeepMind & ETH Zurich’s Novel Interpretability Tool Compiles Human-Readable Code to Transformers’ Weights

In the new paper Tracr: Compiled Transformers as a Laboratory for Interpretability, a research team from ETH Zurich and DeepMind presents Tracr, a compiler that addresses the absence of ground-truth explanations in deep neural network models by “compiling” human-readable code into the weights of a transformer model.
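
Tracr programs are written in RASP, a small language whose primitives map onto attention and MLP layers. The snippet below follows the example style of the open-source tracr repository (API assumed from its README): a RASP program that computes sequence length is compiled into a transformer with concrete weights and then run.

```python
from tracr.rasp import rasp
from tracr.compiler import compiling

# A RASP program computing sequence length: attend everywhere, then
# count the attended positions with SelectorWidth.
all_true = rasp.Select(rasp.tokens, rasp.tokens, rasp.Comparison.TRUE)
length = rasp.SelectorWidth(all_true)

# Compile the program into an actual (JAX) transformer with concrete weights.
model = compiling.compile_rasp_to_model(
    length, vocab={"a", "b", "c"}, max_seq_len=5, compiler_bos="BOS")
out = model.apply(["BOS", "a", "b", "c"])
print(out.decoded)  # per-position sequence length (BOS excluded from the count)
```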

AI Machine Learning & Data Science Research

Google & Lund U’s Optimus Learned Optimization Architecture Efficiently Captures Complex Dependencies

In the new paper Transformer-Based Learned Optimization, a Google Research and Lund University team presents Optimus, an expressive neural network architecture for learned optimization that captures complex dependencies in the parameter space and achieves competitive results on real-world tasks and benchmark optimization problems.

AI Machine Learning & Data Science Research

Stanford U & Google’s Convex Analytic Training Framework Improves the Understanding and Optimization of Transformers

In the new paper Convexifying Transformers: Improving Optimization and Understanding of Transformer Networks, a Stanford University and Google Research team provides a solid theoretical analysis of transformers’ fundamental mechanisms and introduces a novel convex analytic training framework for improving their optimization.

AI Machine Learning & Data Science Research

‘MrsFormer’ Employs a Novel Multiresolution-Head Attention Mechanism to Cut Transformers’ Compute and Memory Costs

In the new paper Transformers with Multiresolution Attention Heads (currently under double-blind review for ICLR 2023), researchers propose MrsFormer, a novel transformer architecture that uses Multiresolution-head Attention to approximate attention heads’ output sequences at multiple resolutions, significantly reducing head redundancy without sacrificing accuracy.

AI Machine Learning & Data Science Research

Wider, Not Deeper: Cambridge, Oxford & ICL Challenge Conventional Transformer Design Approaches

In the new paper Wide Attention Is The Way Forward For Transformers, a research team from the University of Cambridge, Imperial College London, and the University of Oxford challenges the commonly held belief that deeper is better for transformer architectures, demonstrating that wider layers result in superior performance on natural language processing tasks.
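
To make the design axis concrete, the sketch below compares parameter counts for a deep-narrow stack and a single wide layer built from stock PyTorch modules; the specific dimensions are hypothetical illustrations chosen to roughly match budgets, not the paper’s configurations.

```python
import torch.nn as nn

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

# Deep-narrow: eight standard encoder layers.
deep = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=8, dim_feedforward=1024),
    num_layers=8)

# Wide-shallow: a single layer with many heads and a wide FFN, sized to a
# roughly matching parameter budget.
wide = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=64, dim_feedforward=11776),
    num_layers=1)

print(f"deep (8 layers): {n_params(deep):,} parameters")
print(f"wide (1 layer):  {n_params(wide):,} parameters")
```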

AI Machine Learning & Data Science Research

Transformers on Edge Devices? Monash U’s Energy-Saving Attention With Linear Complexity Reduces Energy Cost by 73%

In the new paper EcoFormer: Energy-Saving Attention with Linear Complexity, a Monash University research team presents EcoFormer, an attention mechanism with linear complexity that replaces expensive multiply-accumulate operations with simple accumulations and achieves a 73 percent energy footprint reduction on ImageNet.
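
The enabling trick is to replace the softmax kernel with binary codes so attention can be computed in linear time with, in principle, additions only. The sketch below shows the general flavor, with a random sign-projection hash standing in for EcoFormer’s learned kernelized hashing; it is an illustration of binary kernelized linear attention, not the paper’s exact method.

```python
import torch

def binary_linear_attention(q, k, v, proj, eps: float = 1e-6):
    """Linear attention over binary codes; q, k, v are [seq, d]."""
    phi_q = (q @ proj > 0).float()                # [seq, b] binary features
    phi_k = (k @ proj > 0).float()
    kv = phi_k.T @ v                              # [b, d]: one pass over keys
    z = phi_q @ phi_k.sum(dim=0, keepdim=True).T  # [seq, 1] normalizer
    # phi values are in {0, 1}, so these products are pure accumulations.
    return (phi_q @ kv) / (z + eps)

q = k = v = torch.randn(128, 64)
proj = torch.randn(64, 16)  # random hash; EcoFormer learns its hash functions
out = binary_linear_attention(q, k, v, proj)  # cost grows linearly with seq
```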

AI Machine Learning & Data Science Natural Language Tech Research

Peking U & Microsoft’s Knowledge Attribution Method Enables Editing Factual Knowledge in Pretrained Transformers Without Fine-Tuning

In the new paper Knowledge Neurons in Pretrained Transformers, a research team from Peking University and Microsoft Research introduces a knowledge attribution method that identifies the neurons that store factual knowledge in pretrained transformers and leverages these neurons to edit factual knowledge in transformers without any fine-tuning.
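
The attribution itself is an integrated-gradients-style score over a feed-forward layer’s intermediate activations: scale the activation from zero up to its observed value, accumulate the gradient of the answer probability along the way, and rank neurons by the result. The Python sketch below illustrates the idea with a toy stand-in for the model; `prob_fn` and the readout are hypothetical.

```python
import torch

def neuron_attribution(prob_fn, activation, steps: int = 20):
    """Integrated-gradients-style attribution for one layer's activations.

    prob_fn(a) is a hypothetical stand-in that re-runs the model with the
    layer's activation replaced by `a` and returns the probability of the
    target fact's answer token.
    """
    total_grad = torch.zeros_like(activation)
    for t in range(1, steps + 1):
        scaled = (t / steps) * activation.detach()
        scaled.requires_grad_(True)
        prob_fn(scaled).backward()
        total_grad += scaled.grad
    return activation.detach() * total_grad / steps

# Toy stand-in: "answer probability" is a softmax over a linear readout.
readout = torch.randn(32, 10)
activation = torch.randn(32)
prob_fn = lambda a: torch.softmax(a @ readout, dim=-1)[3]
scores = neuron_attribution(prob_fn, activation)
top_neurons = scores.topk(5).indices  # candidate "knowledge neurons"
```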

AI Computer Vision & Graphics Machine Learning & Data Science Research

IITM & UT Austin’s Generalizable NeRF Transformer Demonstrates Transformers’ Capabilities for Graphical Rendering

In the new paper Is Attention All NeRF Needs?, a research team from the Indian Institute of Technology Madras and the University of Texas at Austin proposes Generalizable NeRF Transformer (GNT), a pure and universal transformer-based architecture for efficient on-the-fly reconstruction of NeRFs. The work demonstrates that a pure attention mechanism suffices for learning a physically-grounded rendering process.

AI Machine Learning & Data Science Research

Google Leverages Transformers to Vastly Simplify Neural Video Compression With SOTA Results

In the new paper VCT: A Video Compression Transformer, a Google Research team presents an elegantly simple yet powerful video compression transformer (VCT) that requires no architectural biases or priors, learning entirely from data without any hand-crafting. VCT is easy to implement and outperforms conventional video compression approaches.

AI Machine Learning & Data Science Research

Tsinghua U & BAAI’s CogView2 Achieves Competitive Text-to-Image Generation With 10x Speedups

In the new paper CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers, Tsinghua University and Beijing Academy of Artificial Intelligence researchers pretrain a Cross-Modal general Language Model (CogLM) for text and image token prediction and fine-tune it for fast super-resolution. The resulting CogView2 hierarchical text-to-image system achieves significant speedups while generating higher-quality images at comparable resolutions.