Tag: Machine Learning

AI Machine Learning & Data Science Research

Google & Waterloo U Scale Generative Retrieval to Handle 8.8M Passages

In the new paper How Does Generative Retrieval Scale to Millions of Passages?, a research team from Google Research and the University of Waterloo performs the first empirical study of generative retrieval across corpus scales, going up to the entire MS MARCO passage ranking task and its 8.8M passages, aiming to provide insights on scaling generative retrieval to millions of passages.
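
For readers unfamiliar with the setup, generative retrieval trains a seq2seq model to decode a passage identifier directly from a query, with decoding constrained to valid identifiers. The sketch below illustrates that constrained decoding over a docid prefix trie; the `next_token_logprobs` scorer is a hypothetical stand-in for the trained model.

```python
# A minimal sketch of generative retrieval's constrained decoding: the model
# decodes a passage identifier character by character, restricted to a prefix
# trie of valid docids. `next_token_logprobs` is a hypothetical stand-in for
# the trained seq2seq scorer used in the paper.

def build_trie(docids):
    """Prefix trie over docid strings so decoding can only emit valid ids."""
    trie = {}
    for d in docids:
        node = trie
        for ch in d:
            node = node.setdefault(ch, {})
        node["<eos>"] = {}
    return trie

def next_token_logprobs(query, prefix, allowed):
    # Stub scorer standing in for the trained model (hypothetical).
    return {tok: -0.1 * ord(tok[0]) for tok in allowed}

def constrained_greedy_decode(query, trie):
    """Greedily emit the docid the model scores highest within the trie."""
    prefix, node = "", trie
    while True:
        allowed = list(node.keys())
        scores = next_token_logprobs(query, prefix, allowed)
        tok = max(allowed, key=scores.get)
        if tok == "<eos>":
            return prefix
        prefix, node = prefix + tok, node[tok]

print(constrained_greedy_decode("what is msmarco", build_trie(["d101", "d102", "d2"])))
```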

AI Machine Learning & Data Science Research

Google & Stanford U’s DoReMi Significantly Speeds Up Language Model Pretraining

In the new paper DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining, a research team from Google and Stanford University introduces Domain Reweighting with Minimax Optimization (DoReMi), a domain weight optimization strategy that leverages distributionally robust optimization (DRO) to substantially speed up effective language model pretraining.
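
A compact sketch of the core update may help: DoReMi trains a small proxy model with Group DRO, repeatedly upweighting domains whose proxy loss exceeds a reference model's loss. The multiplicative-weights step below is a simplified rendering of that idea; the step size, smoothing constant, and numbers are illustrative, not the paper's.

```python
# Simplified DoReMi-style domain reweighting, assuming per-domain excess
# losses (proxy-model loss minus reference-model loss) are computed each step.
import numpy as np

def doremi_update(weights, excess_loss, eta=1.0, smoothing=1e-3):
    """One exponentiated-gradient step: upweight domains with high excess loss."""
    w = weights * np.exp(eta * np.clip(excess_loss, 0.0, None))  # only positive excess counts
    w = w / w.sum()
    u = np.full_like(w, 1.0 / len(w))
    return (1 - smoothing) * w + smoothing * u  # mix with uniform for stability

weights = np.full(3, 1.0 / 3)  # e.g. [web, books, code] (illustrative domains)
for excess in [np.array([0.4, 0.1, 0.0]), np.array([0.2, 0.3, 0.0])]:
    weights = doremi_update(weights, excess)
print(weights)  # domains with persistently high excess loss get larger mixture weight
```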

AI Machine Learning & Data Science Research

Tool Up! DeepMind, Princeton & Stanford’s LATM Enables LLMs to Make Their Own Tools

In the new paper Large Language Models as Tool Makers, a research team from Google DeepMind, Princeton University, and Stanford University presents LATM (large language models as tool makers), a closed-loop framework that enables LLMs to create their own reusable tools to boost efficiency and enhance their problem-solving capabilities.
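
A toy rendering of the closed loop, with a hypothetical `llm(model, prompt)` call standing in for real API access: the tool maker proposes a Python function, the function is verified on a few examples, and the (cheaper) tool user then simply calls it.

```python
# Sketch of the LATM loop. In the paper a strong model acts as tool maker and
# a lighter model as tool user; here `llm` is a canned stand-in (hypothetical),
# while the verification step runs real Python.

def llm(model, prompt):
    """Hypothetical LLM call; returns a canned tool for this demo."""
    return "def tool(nums):\n    return sorted(nums)[len(nums) // 2]\n"

def make_tool(task_description, examples):
    """Tool maker: ask for a reusable Python function, then verify it."""
    code = llm("strong-model", f"Write a Python function `tool` for: {task_description}")
    scope = {}
    exec(code, scope)                      # materialize the proposed tool
    tool = scope["tool"]
    assert all(tool(x) == y for x, y in examples), "tool failed verification"
    return tool

# Tool user: a cheaper model (or plain code) just calls the verified tool.
tool = make_tool("find the median of an odd-length list", [([3, 1, 2], 2)])
print(tool([9, 5, 7, 1, 3]))  # -> 5
```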

AI Machine Learning & Data Science Research

Alibaba & HUST’s ONE-PEACE: Toward a General Representation Model For Unlimited Modalities

In the new paper ONE-PEACE: Exploring One General Representation Model Toward Unlimited Modalities, a research team from Alibaba Group’s DAMO Academy and the Huazhong University of Science and Technology releases ONE-PEACE, a highly extensible model that can align and integrate representations across vision, audio, and language modalities, opening a path toward a general representation model for unlimited modalities.

AI Machine Learning & Data Science Research

Salesforce AI’s CodeT5+ Open Code LLMs Flexibly Adapt to Diverse Downstream Code Understanding and Generation Tasks

In the new paper CodeT5+: Open Code Large Language Models for Code Understanding and Generation, a Salesforce AI Research team presents CodeT5+, a novel family of open encoder-decoder code foundation large language models that can be flexibly adapted to a wide range of code understanding and generation tasks and deliver strong performance on various code-related benchmarks.

AI Machine Learning & Data Science Natural Language Tech Research

‘May the Source Be With You!’ – BigCode’s Open-Access StarCoder Outperforms All Existing Open Code LLMs

In the new paper StarCoder: May the Source Be With You!, the BigCode community releases StarCoder and StarCoderBase, 15.5B parameter open-access large language models (LLMs) trained on 80+ programming languages. StarCoderBase outperforms all open multi-programming-language code LLMs, and StarCoder surpasses all models fine-tuned on Python.

AI Machine Learning & Data Science Research

Meet VideoChat: Integrating Language and Video Models to Boost Video Understanding

In the new paper VideoChat: Chat-Centric Video Understanding, a research team from Shanghai AI Laboratory, Nanjing University, the University of Hong Kong, and the Chinese Academy of Sciences presents VideoChat, a groundbreaking end-to-end chat-centric video understanding system that leverages state-of-the-art video and language models to improve spatiotemporal reasoning, event localization, and causal relationship inference.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Georgia Tech’s ZipIt! Effectively Merges Vision Models Trained on Disjoint Tasks Without Additional Training

In the new paper ZipIt! Merging Models from Different Tasks Without Training, a Georgia Tech research team proposes ZipIt!, a general method that exploits redundant features to combine two or more models with the same architecture but trained on different tasks into one multi-task model without additional training.
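
To make the idea concrete, here is a toy, single-layer rendering of the approach: features of two same-architecture linear layers are matched by correlation on shared probe inputs, and matched weight rows are averaged, "zipping" the models without any gradient steps. Real ZipIt! also merges features within each model and propagates the merge layer by layer; everything below is an illustrative simplification.

```python
# Toy ZipIt!-style merge for one linear layer, assuming activations can be
# collected from both models on shared probe inputs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))            # probe inputs shared by both models
Wa, Wb = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))  # same architecture

Fa, Fb = X @ Wa.T, X @ Wb.T               # per-model output features
Fa_n = (Fa - Fa.mean(0)) / Fa.std(0)
Fb_n = (Fb - Fb.mean(0)) / Fb.std(0)
corr = Fa_n.T @ Fb_n / len(X)             # feature-feature correlation matrix

merged = np.zeros_like(Wa)
used = set()
for i in np.argsort(-np.abs(corr).max(1)):      # most confidently matched first
    j = max((j for j in range(8) if j not in used),
            key=lambda j: abs(corr[i, j]))
    used.add(j)
    sign = np.sign(corr[i, j]) or 1.0           # align feature sign before averaging
    merged[i] = 0.5 * (Wa[i] + sign * Wb[j])    # "zip" the matched pair

print(merged.shape)  # one multi-task layer, obtained without additional training
```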

AI Machine Learning & Data Science Research

Microsoft’s Automatic Prompt Optimization Improves Prompts to Boost LLM Performance

In the new paper Automatic Prompt Optimization with “Gradient Descent” and Beam Search, a Microsoft research team presents Automatic Prompt Optimization, a simple and general prompt optimization algorithm that automatically improves prompts for large language models, significantly reducing the time and effort spent on manual prompt engineering.
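
The loop below sketches that idea with a hypothetical `llm(prompt)` call and a placeholder scoring function: the LLM produces a natural-language "gradient" (a critique of the current prompt), edits the prompt to address it, and a beam of the best-scoring candidates survives each round. The critique/edit wording is illustrative, not the paper's.

```python
# Sketch of APO's textual "gradient descent" with beam search.

def llm(prompt):
    return "revised prompt: " + prompt[-40:]     # stubbed model call (hypothetical)

def score(candidate_prompt, dev_set):
    # Placeholder metric; the real algorithm measures task accuracy on dev data.
    return len(set(candidate_prompt.split()) & set(dev_set))

def optimize(prompt, dev_set, steps=3, beam=4, expansions=2):
    beam_prompts = [prompt]
    for _ in range(steps):
        candidates = list(beam_prompts)
        for p in beam_prompts:
            # 1. Textual "gradient": a critique of where the prompt fails.
            critique = llm(f"Where does this prompt fail?\nPrompt: {p}")
            # 2. Apply the gradient: edit the prompt to address the critique.
            candidates += [llm(f"Fix this issue: {critique}\nPrompt: {p}")
                           for _ in range(expansions)]
        # 3. Beam step: keep only the best-scoring candidate prompts.
        beam_prompts = sorted(candidates, key=lambda c: score(c, dev_set),
                              reverse=True)[:beam]
    return beam_prompts[0]

print(optimize("Classify the sentiment of the review.", ["sentiment", "review"]))
```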

AI Machine Learning & Data Science Research

Optimizing Transformers: Microsoft & RUC’s ResiDual Solves Gradient Vanishing and Representation Collapse Issues

In the new paper ResiDual: Transformer With Dual Residual Connections, a team from Microsoft Research, Microsoft Azure Translation, and Renmin University of China proposes ResiDual, a novel transformer architecture that fuses the connections in post-layer normalization and pre-layer normalization to exploit the benefits of both while also addressing their limitations.
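
A compact PyTorch sketch of the idea, with a generic feed-forward sublayer standing in for the attention and FFN blocks: each block maintains a Post-LN stream that feeds the sublayers and a raw, Pre-LN-style stream that accumulates sublayer outputs and is normalized only once at the end, after which the two are summed. Sizes and the single-sublayer simplification are illustrative.

```python
import torch
import torch.nn as nn

class ResiDualBlock(nn.Module):
    def __init__(self, d_model=64):
        super().__init__()
        self.sublayer = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                      nn.Linear(d_model, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x_post, x_dual):
        out = self.sublayer(x_post)
        x_post = self.norm(x_post + out)   # Post-LN stream: normalized every block
        x_dual = x_dual + out              # dual stream: raw accumulation, no LN
        return x_post, x_dual

class ResiDualStack(nn.Module):
    def __init__(self, depth=6, d_model=64):
        super().__init__()
        self.blocks = nn.ModuleList(ResiDualBlock(d_model) for _ in range(depth))
        self.final_norm = nn.LayerNorm(d_model)

    def forward(self, x):
        x_post, x_dual = x, torch.zeros_like(x)
        for block in self.blocks:
            x_post, x_dual = block(x_post, x_dual)
        return x_post + self.final_norm(x_dual)  # fuse the two streams

print(ResiDualStack()(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```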

AI Machine Learning & Data Science Natural Language Tech Research

Google & TAU Explore How Transformer-Based LLMs Extract Knowledge From Their Parameters

In the new paper Dissecting Recall of Factual Associations in Auto-Regressive Language Models, a team from Google DeepMind, Tel Aviv University, and Google Research investigates how factual associations are stored and extracted internally in transformer-based language models and provides insights on how such models’ factual predictions are formed.

AI Machine Learning & Data Science Research

Microsoft & Peking U’s WizardLM Enables LLMs to Automatically Mass-Produce Complex Instructions

In the new paper WizardLM: Empowering Large Language Models to Follow Complex Instructions, a research team from Microsoft and Peking University presents Evol-Instruct, a novel approach that leverages LLMs to automatically generate large amounts of instruction data with varying levels of complexity, which they use to train their WizardLM model. In human evaluations, the instructions produced by Evol-Instruct were judged superior to human-created instruction datasets.
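
The loop below sketches the flavor of Evol-Instruct with a hypothetical `llm(prompt)` call; the in-depth and in-breadth operations paraphrase the paper's evolution prompts in spirit, and the filtering step is reduced to a trivial check.

```python
import random

def llm(prompt):
    return "stubbed evolved instruction"    # stand-in for a real model call

IN_DEPTH = [
    "Add one more constraint or requirement to this instruction:",
    "Rewrite this instruction to require multi-step reasoning:",
    "Replace general concepts in this instruction with more specific ones:",
]
IN_BREADTH = "Create a brand-new instruction in the same domain as:"

def evolve(seed_pool, rounds=3):
    pool = list(seed_pool)
    for _ in range(rounds):
        inst = random.choice(pool)
        op = random.choice(IN_DEPTH + [IN_BREADTH])
        evolved = llm(f"{op}\n{inst}")
        if evolved and evolved != inst:     # the paper filters degenerate evolutions
            pool.append(evolved)
    return pool

print(len(evolve(["Explain what a hash map is."])))
```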

AI Machine Learning & Data Science Research

UC Berkeley’s FastRLAP Learns Aggressive and Effective High-Speed Driving Strategies With <20 Minutes of Real-World Practice

In the new paper FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing, a UC Berkeley research team proposes FastRLAP (Fast Reinforcement Learning via Autonomous Practicing), a system that autonomously practices in the real world and learns aggressive maneuvers to enable effective high-speed driving.

AI Machine Learning & Data Science Research

Microsoft’s NaturalSpeech 2 Outperforms Previous TTS Systems in Zero-Shot Speech and Singing Synthesis

In the new paper NaturalSpeech 2: Latent Diffusion Models are Natural and Zero-Shot Speech and Singing Synthesizers, a Microsoft team introduces NaturalSpeech 2, a TTS system that uses latent diffusion models to achieve natural, robust zero-shot speech and singing synthesis with expressive prosody.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Look Again, YOLO: Baidu’s RT-DETR Detection Transformer Achieves SOTA Results on Real-Time Object Detection

In the new paper DETRs Beat YOLOs on Real-Time Object Detection, a Baidu Inc. research team presents Real-Time Detection Transformer (RT-DETR), a real-time end-to-end object detector that leverages a hybrid encoder and novel IoU-aware query selection to reduce inference latency. RT-DETR outperforms YOLO object detectors in both accuracy and speed.
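
As a toy illustration of the query-selection half of that design: the encoder's tokens are scored by a confidence head that, in the paper, is trained to reflect localization quality (IoU), and the top-k tokens initialize the decoder's object queries. The untrained heads and dimensions below are stand-ins.

```python
import torch
import torch.nn as nn

d_model, num_queries = 256, 300
encoder_tokens = torch.randn(2, 1000, d_model)     # (batch, H*W tokens, channels)

score_head = nn.Linear(d_model, 1)                 # IoU-aware confidence per token
scores = score_head(encoder_tokens).squeeze(-1)    # (2, 1000)
topk = scores.topk(num_queries, dim=1).indices     # pick the best-scored tokens
idx = topk.unsqueeze(-1).expand(-1, -1, d_model)
queries = encoder_tokens.gather(1, idx)            # initial decoder object queries
print(queries.shape)                               # torch.Size([2, 300, 256])
```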

AI Machine Learning & Data Science Research

Huawei’s DiffFit Unlocks the Transferability of Large Diffusion Models to New Domains

In the new paper DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning, a Huawei Noah’s Ark Lab research team introduces DiffFit, a parameter-efficient fine-tuning technique that enables fast adaptation to new domains for diffusion image generation. Compared to full fine-tuning approaches, DiffFit achieves 2x training speed-ups while training only ~0.12 percent of the model’s parameters.
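
In PyTorch terms, the recipe is small enough to sketch: freeze the pretrained backbone, leave only bias and normalization terms trainable, and wrap each block with a learnable scale factor gamma initialized to 1. The toy MLP backbone below is a stand-in for a real diffusion transformer, so the printed trainable fraction will not match the paper's ~0.12 percent.

```python
import torch
import torch.nn as nn

class ScaledBlock(nn.Module):
    """Wrap a frozen block with a trainable scalar gamma on its output."""
    def __init__(self, block):
        super().__init__()
        self.block = block
        self.gamma = nn.Parameter(torch.ones(1))   # DiffFit-style scale factor
    def forward(self, x):
        return self.gamma * self.block(x)

# Pretend this MLP stack is a pretrained diffusion backbone (stand-in).
backbone = nn.ModuleList(nn.Sequential(nn.Linear(64, 64), nn.LayerNorm(64), nn.GELU())
                         for _ in range(4))

for p in backbone.parameters():
    p.requires_grad = False                         # freeze everything ...
for m in backbone.modules():
    if isinstance(m, nn.LayerNorm):
        for p in m.parameters():
            p.requires_grad = True                  # ... except normalization terms
for name, p in backbone.named_parameters():
    if name.endswith("bias"):
        p.requires_grad = True                      # ... and all bias terms

model = nn.Sequential(*(ScaledBlock(b) for b in backbone))  # add gamma per block
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.2%}")
```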

AI Machine Learning & Data Science Research

DeepMind & MPG Establish a Research Program for Meta-Learned Models of Cognition

In the new paper Meta-Learned Models of Cognition, a team from DeepMind and the Max Planck Institute for Biological Cybernetics (part of the Max-Planck-Gesellschaft, MPG) proposes the establishment of a research program focused on meta-learned models of cognition. The team cites machine learning papers demonstrating how meta-learning can be used to construct Bayes-optimal learning algorithms and suggests it can significantly expand the scope of the rational analysis of cognition.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Microsoft & Bath U’s SpectFormer Significantly Improves Vision Transformers via Frequency and Attention

In the new paper SpectFormer: Frequency and Attention Is What You Need in a Vision Transformer, a research team from Microsoft and the University of Bath proposes SpectFormer, a novel transformer architecture that combines spectral and multi-headed attention layers to better capture appropriate feature representations and improve performance.
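
The spectral half of that combination is easy to sketch in PyTorch: transform the token sequence with an FFT, apply a learnable complex filter per frequency and channel, and transform back. In the paper such spectral layers occupy the initial blocks, with standard multi-head attention in the remaining ones; the shapes here are illustrative.

```python
import torch
import torch.nn as nn

class SpectralGating(nn.Module):
    def __init__(self, seq_len, d_model):
        super().__init__()
        # One learnable complex weight per (frequency, channel).
        freq_bins = seq_len // 2 + 1
        self.filter = nn.Parameter(torch.randn(freq_bins, d_model, 2) * 0.02)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        f = torch.fft.rfft(x, dim=1)           # to the frequency domain
        w = torch.view_as_complex(self.filter)
        f = f * w                              # learnable spectral gating
        return torch.fft.irfft(f, n=x.shape[1], dim=1)  # back to token space

x = torch.randn(2, 16, 64)
print(SpectralGating(16, 64)(x).shape)  # torch.Size([2, 16, 64])
```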

AI Machine Learning & Data Science Natural Language Tech Research

Microsoft’s LLMA Accelerates LLM Generations via an ‘Inference-With-Reference’ Decoding Approach

In the new paper Inference with Reference: Lossless Acceleration of Large Language Models, a Microsoft research team proposes LLMA, an inference-with-reference decoding mechanism that achieves up to 2x lossless speed-ups with identical generation results by exploiting the overlaps between LLM outputs and references.
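
The decoding idea can be sketched over plain token lists: when the tail of the current output matches a span in the reference, the following reference tokens are proposed and checked against the model, so the final output is exactly what plain decoding would produce. `model_next` is a hypothetical stand-in; the real speed-up comes from verifying a whole copied span in one parallel model call.

```python
def model_next(context):
    return context[-1] + 1 if context[-1] < 9 else -1   # stub "LM" (hypothetical)

def decode_with_reference(prompt, reference, match_len=2, copy_len=4, max_new=12):
    out = list(prompt)
    while len(out) < len(prompt) + max_new and out[-1] != -1:
        tail = out[-match_len:]
        starts = [i + match_len for i in range(len(reference) - match_len)
                  if reference[i:i + match_len] == tail]
        copied = 0
        if starts:                                       # overlap found: try copying
            for tok in reference[starts[0]:starts[0] + copy_len]:
                nxt = model_next(out)
                out.append(nxt)                          # the model's token is always kept
                copied += 1
                if nxt != tok:                           # mismatch: stop trusting the copy
                    break
        if copied == 0:
            out.append(model_next(out))                  # ordinary decoding step
    return out

print(decode_with_reference([1, 2], reference=[2, 3, 4, 5, 6, 7]))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9, -1], identical to decoding without the reference
```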

AI Machine Learning & Data Science Research

Meet TaskMatrix.AI: A Microsoft ‘Super-AI’ That Links Foundation Models With Millions of APIs to Perform Diverse Tasks

In the new paper TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs, a Microsoft research team proposes TaskMatrix.AI, a novel ecosystem that connects foundation models with millions of existing models and system APIs to build a “super-AI” capable of addressing a wide range of digital and physical tasks.

AI Machine Learning & Data Science Research

Revolutionizing Games: Parametrix.ai Unveils the Potential of Virtual Interactive Experiences Powered by AI NPCs

A recent tech demo called “Living Chang’an City” has been garnering attention. In this video, AI-powered NPCs can be seen roaming the streets of Chang’an City, each possessing unique identities and short-term and long-term goals. They engage in various life-like interactions, such as chatting, shopping, and even falling in love.

AI Machine Learning & Data Science Natural Language Tech Research

ColossalChat: An Open-source Solution for Cloning ChatGPT with A Complete RLHF Pipeline

Colossal-AI open-sources a complete RLHF pipeline based on the LLaMA pre-trained model, covering supervised data collection, supervised fine-tuning, reward model training, and reinforcement learning fine-tuning, and shares ColossalChat, a practical open-source project that closely follows the original ChatGPT technical approach.
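
As one concrete piece of such a pipeline, here is a minimal PyTorch sketch of the reward-model stage: given chosen and rejected responses to the same prompts, the model is trained with a pairwise ranking loss. The tiny bag-of-embeddings scorer and random data are stand-ins for the LLaMA-based reward model and real preference pairs.

```python
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    def __init__(self, vocab=1000, d=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.head = nn.Linear(d, 1)
    def forward(self, tokens):                      # tokens: (batch, seq)
        return self.head(self.emb(tokens).mean(1)).squeeze(-1)  # scalar reward

rm = TinyRewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

chosen = torch.randint(0, 1000, (8, 16))            # preferred responses (toy data)
rejected = torch.randint(0, 1000, (8, 16))          # dispreferred responses

for _ in range(10):
    # Pairwise ranking loss: -log sigmoid(r_chosen - r_rejected)
    loss = -torch.nn.functional.logsigmoid(rm(chosen) - rm(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))  # decreases as the model learns the preference ordering
```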

AI Machine Learning & Data Science Natural Language Tech Research

Google’s CoLT5 Processes Extremely Long Inputs via Conditional Computation

A Google Research team addresses transformers’ input-length limitations in the new paper CoLT5: Faster Long-Range Transformers with Conditional Computation, proposing CoLT5 (Conditional LongT5), a family of models that applies a novel conditional computation approach for faster, higher-quality processing of long inputs of up to 64,000 tokens.
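
Conditional computation here means that every token takes a cheap path while only a routed subset takes an expensive one. The PyTorch sketch below renders that idea for a feed-forward layer; the router, gating, and dimensions are illustrative simplifications of the paper's design.

```python
import torch
import torch.nn as nn

class ConditionalFFN(nn.Module):
    def __init__(self, d=64, k=4):
        super().__init__()
        self.k = k
        self.light = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.heavy = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.router = nn.Linear(d, 1)

    def forward(self, x):                            # x: (batch, seq, d)
        out = self.light(x)                          # cheap path for every token
        scores = self.router(x).squeeze(-1)          # (batch, seq) routing scores
        topk = scores.topk(self.k, dim=1).indices    # pick the k most important tokens
        idx = topk.unsqueeze(-1).expand(-1, -1, x.shape[-1])
        picked = x.gather(1, idx)                    # gather routed tokens
        gate = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
        out = out.scatter_add(1, idx, gate * self.heavy(picked))  # heavy path for top-k only
        return out

print(ConditionalFFN()(torch.randn(2, 128, 64)).shape)  # torch.Size([2, 128, 64])
```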

AI Machine Learning & Data Science Natural Language Tech Research

OpenAI, OpenResearch & UPenn Paper Considers How GPTs Will Impact the US Labour Market

In the new paper GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models, a research team from OpenAI, OpenResearch, and the University of Pennsylvania investigates the potential impact of LLMs like GPT on the US labour market, shedding light on the economic, social, and policy implications.