Latest Posts

AI Machine Learning & Data Science Research

Futureverse’s Universal High-Quality Text-to-Music Generator JEN-1 Makes Significant Advancements

In a new paper JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models, a Futureverse research team presents JEN-1, a universal framework that combines bidirectional and unidirectional modes to generate high-quality music conditioned on either text or music representations.

AI Machine Learning & Data Science Research

New Study Unleashes the Power of Large Language Models to Master 16,000+ Real-World APIs

In a new paper ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs, a research team from Tsinghua University, ModelBest Inc., Renmin University of China, Yale University, Tencent Inc. and Zhihu Inc. presents ToolLLM, a general tool-use framework that demonstrates a compelling capability to master 16,464 real-world RESTful APIs.

AI Machine Learning & Data Science Research

DeepMind Builds A Precise Mathematical Foundation of Continual Reinforcement Learning

In a new paper A Definition of Continual Reinforcement Learning, a DeepMind research team rethinks RL problems as endless adaptation and provides a clean, general, precise mathematical definition of continual reinforcement learning (CRL), aiming to promote research on CRL from a solid conceptual foundation.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Objaverse-XL: Unleashing 10M+ 3D Objects for Advanced 3D Vision

In a new paper Objaverse-XL: A Universe of 10M+ 3D Objects, a research team from the Allen Institute for AI, University of Washington, Columbia University, Stability AI, California Institute of Technology and LAION joins forces to present Objaverse-XL, a large-scale, web-crawled dataset of 3D assets that provides substantially richer variety and higher-quality data, aiming to boost the performance of state-of-the-art 3D models.

AI Machine Learning & Data Science Research

65-Billion-Parameter Large Model Pretraining Accelerated by 38%, Best Practices for Building LLaMA-Like Base Models Open-Sourced

Colossal-AI, the world’s largest and most active big-model development tool and community, uses LLaMA, currently the most widely adopted large model, to demonstrate the tool’s groundbreaking pretraining solutions for 65-billion-parameter models, improving training speed by 38%.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Shanghai AI Lab, CUHK & Stanford U Extend Personalized Text-to-Image Diffusion Models Into Animation Generators Without Tuning

In a new paper AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning, a research team presents AnimateDiff, a general and practical framework that can generate animations from any personalized text-to-image (T2I) model without any extra training or model-specific tuning.

AI Machine Learning & Data Science Natural Language Tech Research

Microsoft’s new Pareto Optimal Self-Supervision Framework Automatically Corrects Language Models to Boost GPT SOTA Records

In a new paper Automatic Calibration and Error Correction for Large Language Models via Pareto Optimal Self-Supervision, a Microsoft research team presents Pareto optimal self-supervision, a flexible framework that leverages programmatic supervision to automatically calibrate and correct errors for large language models without extra manual effort.

AI Machine Learning & Data Science Research

DeepMind Proposes New Paradigm for Interfacing Language Models with Robots Through Rewards

In a new paper Language to Rewards for Robotic Skill Synthesis, a Google DeepMind research team proposes a new paradigm that leverages reward functions to interface language with low-level robot actions, enabling non-technical users to steer novel and intricate robot behaviors without large amounts of data or the expert knowledge needed to engineer low-level primitives.

AI Computer Vision & Graphics Machine Learning & Data Science Research

DeepMind Unlocks Web-Scale Training for Open-World Detection

In a new paper Scaling Open-Vocabulary Object Detection, a DeepMind research team introduces the OWLv2 model, an optimized architecture with improved training efficiency, and applies the OWL-ST self-training recipe to OWLv2 to substantially improve detection performance, achieving state-of-the-art results on open-vocabulary detection tasks.

AI Machine Learning & Data Science Research

OpenAI Startup Fund’s Portfolio Company Improves RVQGAN: 90x Compression of 44.1 kHz Audio at 8 kbps Bandwidth

In a new paper High-Fidelity Audio Compression with Improved RVQGAN, a Descript research team presents Improved RVQGAN, a high-fidelity universal audio compression model that combines advances in high-fidelity audio generation with improved adversarial and reconstruction losses to achieve 90x compression of 44.1 kHz audio at only 8 kbps bandwidth.

AI Machine Learning & Data Science Research

Samsung & Meta AI’s Adaptive Parameter-Free Learning Rate Method Matches Hand-Tuned Adam Optimizer

In a new paper Prodigy: An Expeditiously Adaptive Parameter-Free Learner, a research team from Samsung AI Center and Meta AI presents two novel modifications, Prodigy and Resetting, to enhance the D-Adaptation method’s worst-case non-asymptotic convergence rate, achieving faster convergence rates and better optimization outputs.

AI Computer Vision & Graphics Machine Learning & Data Science Research

DeepMind Claims Image Captioners Alone Are Surprisingly More Powerful Than Previously Believed, Competing with CLIP

In a new paper Image Captioners Are Scalable Vision Learners Too, a DeepMind research team presents CapPa, an image-captioning-based pretraining strategy that can compete with CLIP and exhibits favorable model and data scaling properties, verifying that plain image captioning can be a competitive pretraining strategy for vision backbones.

AI Machine Learning & Data Science Research

From Pixels to UI Actions: Google’s PIX2ACT Agent Learns to Follow Instructions via GUIs

In a new paper From Pixels to UI Actions: Learning to Follow Instructions via Graphical User Interfaces, a research team from Google and DeepMind proposes PIX2ACT, a Transformer-based image-to-text model that is able to generate outputs corresponding to mouse and keyboard actions based solely on pixel-based screenshots from Graphical User Interfaces (GUIs).

AI Machine Learning & Data Science Research

Salesforce AI’s CodeTF Library Facilitates Easy LLM Integration for Code Intelligence Tasks

In a new paper CodeTF: One-stop Transformer Library for State-of-the-art Code LLM, a Salesforce AI research team develops CodeTF, an open-source, one-stop, comprehensive Python library that provides a seamless interface for training and inference on code intelligence tasks, aiming to facilitate easy integration of state-of-the-art language models into real-world applications.