Category: Research

Technical review of the newest machine intelligence research.

AI Machine Learning & Data Science Research

Google & CMU’s Semantic Pyramid AutoEncoder Marks the First Successful Attempt at Multimodal Generation with Frozen LLMs

In a new paper SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs, a research team from Google Research and Carnegie Mellon University introduces Semantic Pyramid AutoEncoder (SPAE), the first successful method for enabling frozen LLMs to solve cross-modal tasks.
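
The core trick, as the paper describes it, is mapping visual content into the frozen LLM’s own token vocabulary, arranged in a pyramid whose coarse levels carry semantic concepts and fine levels carry appearance detail. Below is a minimal sketch of the nearest-token quantization step only; the embedding table, shapes, and names are all invented for illustration.

```python
import numpy as np

# Minimal sketch of SPAE's quantization step, assuming a frozen LLM
# vocabulary embedding table and an image encoder that maps patches into
# the same space. All sizes and names here are illustrative.

rng = np.random.default_rng(0)
vocab_emb = rng.normal(size=(32000, 512))  # frozen LLM token embeddings (V x D)
features = rng.normal(size=(16, 512))      # visual features for 16 patches (N x D)

def quantize_to_tokens(feats, emb):
    """Assign each visual feature to its nearest token embedding (squared L2)."""
    d2 = (feats**2).sum(1, keepdims=True) - 2 * feats @ emb.T + (emb**2).sum(1)
    return d2.argmin(axis=1)

token_ids = quantize_to_tokens(features, vocab_emb)
# The resulting ids are ordinary text tokens, so the frozen LLM can consume
# (and, in reverse, generate) image content without any weight updates.
```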

AI Machine Learning & Data Science Research

DeepMind Collaborates on Shaping Personality Traits in LLMs

In a new paper Personality Traits in Large Language Models, a research team from Google, Cambridge University and Keio University proposes principled, validated methods for establishing the construct validity of personality characterizations in LLMs, simulates population variance in LLM responses, and develops a personality-shaping mechanism to control LLM personality traits.

AI Machine Learning & Data Science Natural Language Tech Research

Microsoft’s New Pareto Optimal Self-Supervision Framework Automatically Corrects Language Models to Boost GPT SOTA Records

In a new paper Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision, a Microsoft research team presents Pareto optimal self-supervision, a flexible framework that leverages programmatic supervision to automatically calibrate and correct errors in large language models without extra manual effort.

AI Machine Learning & Data Science Research

DeepMind Proposes a New Paradigm for Interfacing Language Models with Robots Through Rewards

In a new paper Language to Rewards for Robotic Skill Synthesis, a Google DeepMind research team proposes a new paradigm that leverages reward functions to interface language and low-level robot actions, enabling non-technical users to steer novel and intricate robot behaviors without large amounts of data or the expert knowledge needed to engineer low-level primitives.
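
In rough terms, the LLM writes a reward function from the user’s instruction, and a low-level controller then searches for actions that maximize it. The sketch below captures that interface with toy stand-ins; `llm_to_reward`, `simulate`, and the random-shooting search are illustrative, not the paper’s actual reward code or motion controller.

```python
import numpy as np

def llm_to_reward(instruction: str):
    # In the paper, an LLM writes reward code from the instruction; this
    # hard-coded mapping is a stand-in for that call.
    if "raise" in instruction:
        return lambda state: state["torso_height"]          # taller is better
    return lambda state: -abs(state["torso_height"] - 0.3)  # hold a set height

def simulate(action: np.ndarray) -> dict:
    # Toy dynamics: torso height responds to the mean leg extension.
    return {"torso_height": float(np.tanh(action.mean()))}

def random_shooting(reward_fn, n_samples=256, action_dim=4):
    # Stand-in for a low-level optimizer: sample candidate actions and
    # keep the one the LLM-written reward scores highest.
    actions = np.random.default_rng(0).uniform(-1, 1, size=(n_samples, action_dim))
    scores = [reward_fn(simulate(a)) for a in actions]
    return actions[int(np.argmax(scores))]

best_action = random_shooting(llm_to_reward("raise the torso"))
```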

AI Machine Learning & Data Science Research

FastSAM Drastically Reduces Cost to Provide a Real-Time Solution for the Segment Anything Task

In a new paper Fast Segment Anything, a research team from the Chinese Academy of Sciences, the University of Chinese Academy of Sciences, Objecteye Inc., and Wuhan AI Research presents FastSAM, a real-time solution for the segment anything task that achieves comparable performance to SAM while drastically reducing computational demands.

AI Computer Vision & Graphics Machine Learning & Data Science Research

DeepMind Unlocks Web-Scale Training for Open-World Detection

In a new paper Scaling Open-Vocabulary Object Detection, a DeepMind research team introduces OWLv2, an optimized architecture with improved training efficiency, and applies an OWL-ST self-training recipe to it, substantially improving detection performance and achieving state-of-the-art results on open-vocabulary detection.
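
The self-training recipe follows the familiar pseudo-labeling pattern: an existing open-vocabulary detector annotates web-scale image data, low-confidence boxes are filtered out, and a student model trains on what remains. A hedged sketch of that loop, where `teacher_detect` and all data are placeholder stubs:

```python
from dataclasses import dataclass

@dataclass
class Box:
    label: str
    score: float
    xyxy: tuple

def teacher_detect(image, queries):
    # Stand-in for an existing open-vocabulary detector.
    return [Box(label=q, score=0.5, xyxy=(0, 0, 10, 10)) for q in queries]

def pseudo_label(images, queries, threshold=0.3):
    # Keep only reasonably confident pseudo-boxes for student training.
    labeled = []
    for image in images:
        boxes = [b for b in teacher_detect(image, queries) if b.score >= threshold]
        if boxes:
            labeled.append((image, boxes))
    return labeled

data = pseudo_label(images=["img0", "img1"], queries=["cat", "skateboard"])
# A student detector would now be trained on `data`, then optionally
# fine-tuned on human annotations for the target vocabulary.
```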

AI Machine Learning & Data Science Research

Princeton U’s Infinigen Provides Infinite Photorealistic 3D Scene Generation of the Natural World

In a new paper Infinite Photorealistic Worlds using Procedural Generation, a Princeton University research team presents Infinigen, a procedural generator of photorealistic 3D scenes that can produce unlimited, diverse training data of the natural world, substantially expanding the coverage of existing synthetic data.
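
Procedural generation is what makes the supply of scenes effectively unlimited: each scene is derived from random parameters pushed through handcrafted rules rather than drawn from a fixed dataset. The toy sketch below illustrates only the idea; the fields are invented, and Infinigen’s actual system composes photorealistic geometry and materials in Blender rather than dictionaries.

```python
import random

def make_scene(seed: int) -> dict:
    # One deterministic scene specification per seed; the rules here are
    # invented stand-ins for Infinigen's procedural asset generators.
    rng = random.Random(seed)
    return {
        "terrain": rng.choice(["mountain", "river", "desert", "forest floor"]),
        "time_of_day": rng.uniform(0, 24),
        "n_plants": rng.randint(0, 500),
        "weather": rng.choice(["clear", "fog", "rain", "snow"]),
    }

scenes = [make_scene(seed) for seed in range(3)]  # arbitrarily many scenes
```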

AI Machine Learning & Data Science Research

OpenAI Startup Fund’s Portfolio Company Improves RVQGAN: 90x Compression of 44.1 kHz Audio at 8 kbps Bandwidth

In a new paper High-Fidelity Audio Compression with Improved RVQGAN, a Descript research team presents Improved RVQGAN, a high-fidelity universal audio compression model that combines advances in high-fidelity audio generation with improved adversarial and reconstruction losses to achieve 90x compression of 44.1 kHz audio at only 8 kbps bandwidth.
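
The 90x figure is easy to sanity-check if one assumes 16-bit mono PCM as the uncompressed reference (an assumption on our part; the paper’s exact reference may differ):

```python
# Back-of-the-envelope check of the headline compression ratio, assuming
# 16-bit mono PCM as the uncompressed baseline.
sample_rate_hz = 44_100
bits_per_sample = 16
raw_kbps = sample_rate_hz * bits_per_sample / 1000  # 705.6 kbps uncompressed
coded_kbps = 8
print(raw_kbps / coded_kbps)                        # ~88x, i.e. roughly 90x
```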

AI Machine Learning & Data Science Research

Samsung & Meta AI’s Adaptive Parameter-Free Learning Rate Method Matches Hand-Tuned Adam Optimizer

In a new paper Prodigy: An Expeditiously Adaptive Parameter-Free Learner, a research team from Samsung AI Center and Meta AI presents two novel modifications, Prodigy and Resetting, that improve the D-Adaptation method’s worst-case non-asymptotic convergence rate, achieving faster convergence and better optimization outcomes.

AI Computer Vision & Graphics Machine Learning & Data Science Research

DeepMind Claims Image Captioners Alone Are More Powerful Than Previously Believed, Competing with CLIP

In a new paper Image Captioners Are Scalable Vision Learners Too, a DeepMind research team presents CapPa, an image-captioning-based pretraining strategy that can compete with CLIP and exhibits favorable model and data scaling properties, verifying that plain image captioning can be a competitive pretraining strategy for vision backbones.

AI Machine Learning & Data Science Research

From Pixels to UI Actions: Google’s PIX2ACT Agent Learns to Follow Instructions via GUIs

In a new paper From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces, a research team from Google and DeepMind proposes PIX2ACT, a Transformer-based image-to-text model that is able to generate outputs corresponding to mouse and keyboard actions based solely on pixel-based screenshots from Graphical User Interfaces (GUIs).

AI Machine Learning & Data Science Research

Salesforce AI’s CodeTF Library Facilitates Easy LLM Integration for Code Intelligence Tasks

In a new paper CodeTF: One-stop Transformer Library for State-of-the-art Code LLM, a Salesforce AI research team develops CodeTF, an open-source, one-stop, comprehensive Python library that provides a seamless interface for training and inference on code intelligence tasks, aiming to facilitate easy integration of state-of-the-art language models into real-world applications.

AI Machine Learning & Data Science Research

Microsoft’s LLaVA-Med Trains a Large Language-and-Vision Assistant for Biomedicine Within 15 Hours

In a new paper LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day, a Microsoft research team proposes a Large Language and Vision Assistant for BioMedicine (LLaVA-Med), which can be trained in less than 15 hours and demonstrates strong multimodal conversational capability, aiding inquiries about biomedical images.

AI Machine Learning & Data Science Research

DeepMind, Mila & Montreal U’s Bigger, Better, Faster RL Agent Achieves Super-human Performance on Atari 100K

In a new paper Bigger, Better, Faster: Human-level Atari with human-level efficiency, a research team from Google DeepMind, Mila and the Université de Montréal presents a value-based RL agent, which they call Bigger, Better, Faster (BBF), that achieves super-human performance on the Atari 100K benchmark on a single GPU.

AI Machine Learning & Data Science Research

Google & Waterloo U Scale Generative Retrieval to Handle 8.8M Passages

In a new paper How Does Generative Retrieval Scale to Millions of Passages?, a research team from Google Research and the University of Waterloo performs the first empirical study of generative retrieval across various corpus scales, even scaling up to the entire MS MARCO passage ranking task with 8.8M passages, aiming to provide insights on scaling generative retrieval to millions of passages.
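
Generative retrieval replaces the index-then-rank pipeline with a seq2seq model that generates a document identifier directly, typically with decoding constrained so only valid docids can be emitted. A toy sketch of trie-constrained decoding, where the scoring function is a stub standing in for the model:

```python
docids = ["d-001", "d-002", "d-113"]  # tiny corpus of valid identifiers

def build_trie(strings):
    # Prefix tree over valid docids; "$" marks the end of an identifier.
    root = {}
    for s in strings:
        node = root
        for ch in s:
            node = node.setdefault(ch, {})
        node["$"] = {}
    return root

def constrained_decode(score, trie):
    # Greedily pick the highest-scoring next character among trie children,
    # so the output is always a valid docid.
    out, node = "", trie
    while "$" not in node or len(node) > 1:
        ch = max((c for c in node if c != "$"), key=lambda c: score(out + c))
        out, node = out + ch, node[ch]
    return out

result = constrained_decode(score=lambda s: -len(set(s)), trie=build_trie(docids))
```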

AI Machine Learning & Data Science Research

Google & Stanford U’s DoReMi Significantly Speeds Up Language Model Pretraining

In the new paper DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining, a research team from Google and Stanford University introduces Domain Reweighting with Minimax Optimization (DoReMi), a domain weight optimization strategy that leverages distributionally robust optimization (DRO) to substantially speed up effective language model pretraining.
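
DoReMi’s update is easiest to see as multiplicative weights on per-domain excess loss: domains where a small proxy model lags a reference model get upweighted, and the resulting weights define the pretraining mixture. A hedged sketch with made-up losses; in the paper these come from training a small proxy model with Group DRO.

```python
import numpy as np

def update_domain_weights(alpha, proxy_loss, ref_loss, lr=1.0, smooth=1e-3):
    excess = np.maximum(proxy_loss - ref_loss, 0.0)  # per-domain excess loss
    alpha = alpha * np.exp(lr * excess)              # upweight hard domains
    alpha = alpha / alpha.sum()                      # renormalize to a simplex
    # Mix with the uniform distribution for stability (smoothing).
    return (1 - smooth) * alpha + smooth / len(alpha)

alpha = np.ones(3) / 3                               # e.g. web, books, code
alpha = update_domain_weights(alpha,
                              proxy_loss=np.array([2.9, 2.1, 1.4]),
                              ref_loss=np.array([2.5, 2.0, 1.5]))
# `alpha` now favors the first domain, where the proxy lags the reference most.
```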

AI Machine Learning & Data Science Research

Alibaba & HUST’s ONE-PEACE: Toward a General Representation Model For Unlimited Modalities

In the new paper ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities, a research team from Alibaba Group’s DAMO Academy and the Huazhong University of Science and Technology releases ONE-PEACE, a highly extensible model that can align and integrate representations across vision, audio, and language modalities, opening a path toward the creation of a general representation model for unlimited modalities.

AI Machine Learning & Data Science Research

Salesforce AI’s CodeT5+ Open Code LLMs Flexibly Adapt to Diverse Downstream Code Understanding and Generation Tasks

In the new paper CodeT5+: Open Code Large Language Models for Code Understanding and Generation, a Salesforce AI Research team presents CodeT5+, a novel family of encoder-decoder code foundation large language models that can be flexibly adapted to a wide range of code understanding and generation tasks and achieve strong performance on various code-related benchmarks.

AI Machine Learning & Data Science Natural Language Tech Research

‘May the Source Be With You!’ – BigCode’s Open-Access StarCoder Outperforms All Existing Open Code LLMs

In the new paper StarCoder: May the Source Be With You!, the BigCode community releases StarCoder and StarCoderBase, 15.5B parameter open-access large language models (LLMs) trained on 80+ programming languages. StarCoderBase outperforms all multi-programming-language code LLMs, and StarCoder surpasses all models fine-tuned on Python.

AI Machine Learning & Data Science Research

Meet VideoChat: Integrating Language and Video Models to Boost Video Understanding

In the new paper VideoChat: Chat-Centric Video Understanding, a research team from Shanghai AI Laboratory, Nanjing University, the University of Hong Kong, and the Chinese Academy of Sciences presents VideoChat, a groundbreaking end-to-end chat-centric video understanding system that leverages state-of-the-art video and language models to improve spatiotemporal reasoning, event localization, and causal relationship inference.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Georgia Tech’s ZipIt! Effectively Merges Vision Models Trained on Disjoint Tasks Without Additional Training

In the new paper ZipIt! Merging Models from Different Tasks Without Training, a Georgia Tech research team proposes ZipIt!, a general method that exploits redundant features to combine two or more models with the same architecture but trained on different tasks into one multi-task model without additional training.
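
The “zip” intuition is that two models trained on different tasks still learn many redundant features, so similar features can be paired and merged rather than kept twice. The toy version below pairs and averages weight rows; as we understand it, the actual method matches on activation correlations and can also merge features within a single model.

```python
import numpy as np

rng = np.random.default_rng(0)
w_a = rng.normal(size=(8, 16))  # one layer's weights from model A (8 units)
w_b = rng.normal(size=(8, 16))  # the same layer's weights from model B

def zip_layers(wa, wb):
    # Greedily pair each unit in A with its most similar unused unit in B,
    # then average the paired weights into a single merged unit.
    sim = wa @ wb.T
    merged, available = [], list(range(len(wb)))
    for i in range(len(wa)):
        j = max(available, key=lambda k: sim[i, k])
        available.remove(j)
        merged.append((wa[i] + wb[j]) / 2)
    return np.stack(merged)

w_merged = zip_layers(w_a, w_b)  # one layer serving both tasks, no retraining
```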

AI Machine Learning & Data Science Research

Microsoft’s Automatic Prompt Optimization Improves Prompts to Boost LLM Performance

In the new paper Automatic Prompt Optimization with “Gradient Descent” and Beam Search, a Microsoft research team presents Automatic Prompt Optimization, a simple and general prompt optimization algorithm that automatically improves prompts for large language models, significantly reducing the time and energy spent on manual prompting approaches.
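
The loop pairs a natural-language “gradient” (an LLM critique of where the current prompt fails) with beam search over edited candidates. A schematic sketch in which `llm` and `score` are stubs standing in for a chat-model call and a task metric:

```python
def llm(text: str) -> str:
    return text + " [edited]"   # placeholder for a real model call

def score(prompt: str, dev_set) -> float:
    return -len(prompt)         # placeholder for a validation metric

def apo_step(beam, dev_set, width=4, edits_per_prompt=2):
    candidates = list(beam)
    for prompt in beam:
        # "Gradient": a textual critique of the prompt's failures.
        critique = llm(f"Why does this prompt fail on examples? {prompt}")
        # Apply the gradient: propose edited prompts addressing the critique.
        for _ in range(edits_per_prompt):
            candidates.append(llm(f"Rewrite the prompt to fix: {critique}\n{prompt}"))
    # Beam search: keep the highest-scoring candidates.
    return sorted(candidates, key=lambda p: score(p, dev_set), reverse=True)[:width]

beam = ["Classify the sentiment of the review."]
for _ in range(3):
    beam = apo_step(beam, dev_set=None)
```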

AI Machine Learning & Data Science Research

Optimizing Transformers: Microsoft & RUC’s ResiDual Solves Gradient Vanishing and Representation Collapse Issues

In the new paper ResiDual: Transformer With Dual Residual Connections, a team from Microsoft Research, Microsoft Azure Translation, and Renmin University of China proposes ResiDual, a novel transformer architecture that fuses the connections in post-layer normalization and pre-layer normalization to exploit the benefits of both while also addressing their limitations.
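
On our reading of the paper, the sublayers run on a Post-LN stream while a second stream accumulates the raw sublayer outputs Pre-LN-style, and the normalized second stream is added back at the network’s output. A simplified PyTorch sketch with illustrative dimensions; the feed-forward sublayer is omitted for brevity.

```python
import torch
import torch.nn as nn

class ResiDualBlock(nn.Module):
    """Simplified dual-residual block: attention only, no feed-forward."""

    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln = nn.LayerNorm(d_model)

    def forward(self, x_post, y_pre):
        out, _ = self.attn(x_post, x_post, x_post)  # sublayer on Post-LN stream
        x_post = self.ln(x_post + out)              # Post-LN residual connection
        y_pre = y_pre + out                         # Pre-LN-style dual stream
        return x_post, y_pre

blocks = nn.ModuleList([ResiDualBlock() for _ in range(2)])
final_ln = nn.LayerNorm(64)
x = torch.randn(1, 8, 64)           # (batch, seq, d_model)
x_post, y_pre = x, x                # both streams start from the embeddings
for blk in blocks:
    x_post, y_pre = blk(x_post, y_pre)
output = x_post + final_ln(y_pre)   # fuse the two streams at the output
```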

AI Machine Learning & Data Science Natural Language Tech Research

Google & TAU Explore How Transformer-Based LLMs Extract Knowledge From Their Parameters

In the new paper Dissecting Recall of Factual Associations in Auto-Regressive Language Models, a team from Google DeepMind, Tel Aviv University and Google Research investigates how factual associations are stored and extracted internally in transformer-based language models and provides insights on how such models’ factual predictions are formed.