Tag: technology

AI Machine Learning & Data Science Research

Harnessing the Power of Hundreds of GPUs: NVIDIA’s NeMo-Aligner Unleashes Potential for Large Model Alignment

In a new paper NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment, a team of researchers from NVIDIA introduces NeMo-Aligner, a toolkit for large-scale LLM alignment that can efficiently harness hundreds of GPUs for training.

AI Machine Learning & Data Science Research

Revolutionizing Video Understanding: Real-Time Captioning for Any Length with Google’s Streaming Model

In a new paper Streaming Dense Video Captioning, a Google research team proposes a streaming dense video captioning model, which revolutionizes dense video captioning by enabling the processing of videos of any length and making predictions before the entire video is fully analyzed, thus marking a significant advancement in the field.

AI Machine Learning & Data Science Research

Huawei & Peking U’s DiJiang: A Transformer Achieving LLaMA2-7B Performance at 1/50th the Training Cost

A research team from Huawei and Peking University introduces DiJiang, a groundbreaking Frequency Domain Kernelization approach, which facilitates the transition to a linear complexity model with minimal training overhead, achieving performance akin to LLaMA2-7B across various benchmarks, but at just 1/50th of the training cost.

AI Machine Learning & Data Science Research

Stanford’s VideoAgent Achieves New SOTA of Long-Form Video Understanding via Agent-Based System

In a new paper VideoAgent: Long-form Video Understanding with Large Language Model as Agent, a Stanford University research team introduces VideoAgent, an innovative approach that simulates human comprehension of long-form videos through an agent-based system, showcasing superior effectiveness and efficiency compared to current state-of-the-art methods.

AI Machine Learning & Data Science Natural Language Tech Research

Nomic Embed: The Inaugural Open-Source Long Text Embedding Model Outshining OpenAI’s Finest

In a new paper Nomic Embed: Training a Reproducible Long Context Text Embedder, a Nomic AI research team introduces nomic-embed-text-v1, the first fully reproducible, open-source, open-weights, open-data text embedding model, capable of handling an extensive context length of 8192 tokens in English.

AI Machine Learning & Data Science Research

Google and UT Austin’s Game-Changing Approach Distills Vision-Language Models on Millions of Videos

In a new paper Distilling Vision-Language Models on Millions of Videos, a research team from Google and UT Austin introduces a straightforward yet highly effective method to adapt image-based vision-language models to video. The approach involves generating high-quality pseudo-captions for millions of videos, outperforming state-of-the-art methods across various video-language benchmarks.

AI Machine Learning & Data Science Research

New Breakthrough in Nature: Controlling the Human Language Network via Large Language Models

In a new breakthrough paper Driving and suppressing the human language network using large language models, a research team from the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, University of Minnesota, and Harvard University leverages a GPT-based encoding model to identify sentences predicted to elicit specific responses within the human language network.

AI Machine Learning & Data Science Research

Google’s AMIE Marks A Significant Milestone Toward Conversational Diagnostic AI

In a new paper Towards Conversational Diagnostic AI, a research team from Google Research and Google DeepMind introduces AMIE (Articulate Medical Intelligence Explorer), an LLM-based AI system meticulously optimized for clinical history-taking and diagnostic dialogues, showcasing superior diagnostic accuracy and outperforming primary care physicians (PCPs).

AI Machine Learning & Data Science Natural Language Tech Research

LangSplat: Turbocharging 3D Language Fields with a Mind-Blowing 199x Speed Boost

In a new paper LangSplat: 3D Language Gaussian Splatting, a research team from Tsinghua University and Harvard University introduces LangSplat, a groundbreaking 3D Gaussian Splatting-based method designed for 3D language fields, which surpasses the state-of-the-art LERF method while boasting a remarkable speed improvement of 199 times.

AI Machine Learning & Data Science Research

Gemini: Bridging Tomorrow’s Deep Neural Network Frontiers with Unrivaled Chiplet Accelerator Mastery

A research team introduces Gemini, an innovative framework for joint architecture and mapping co-exploration that aims to propel large-scale DNN chiplet accelerators to new heights, achieving an impressive average performance improvement of 1.98× and an energy-efficiency boost of 1.41× compared to the state-of-the-art Simba architecture.