Category: AI

Global machine intelligence updates.

AI Machine Learning & Data Science Research

Adobe’s DMV3D Achieves SOTA Performance for High-Fidelity 3D Object Generation Within Seconds

A research team introduces DMV3D, an innovative single-stage, category-agnostic diffusion model. The model generates 3D Neural Radiance Fields (NeRFs) from either text or a single input image through direct model inference, enabling the creation of diverse, high-fidelity 3D objects in roughly 30 seconds per asset.

AI Machine Learning & Data Science Research

DeepMind’s DiLoCo Revolutionizes Language Model Training with 500× Less Communication

In a new paper DiLoCo: Distributed Low-Communication Training of Language Models, a Google DeepMind research team presents Distributed Low-Communication (DiLoCo). DiLoCo employs a distributed optimization algorithm that facilitates the training of language models on islands of poorly connected devices, surpassing the performance of fully synchronous models while reducing communication by 500 times.
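
The two-level scheme is easy to sketch. Below is a hedged numpy toy of the optimization loop the paper describes, with simplifying assumptions flagged in the comments: each worker minimizes its own quadratic (standing in for a data shard) with plain SGD, whereas DiLoCo uses AdamW for the inner steps; the outer Nesterov-momentum update on the averaged parameter delta follows the paper. Workers communicate only once per H inner steps, which is where the communication savings come from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: each of K workers holds its own quadratic loss
# L_i(theta) = 0.5 * ||theta - c_i||^2, so grad_i = theta - c_i.
K, DIM = 4, 8
centers = rng.normal(size=(K, DIM))

theta = np.zeros(DIM)          # global ("outer") parameters
momentum = np.zeros(DIM)       # outer Nesterov momentum buffer

H = 50                         # inner steps between communications
INNER_LR, OUTER_LR, MU = 0.1, 0.7, 0.9

for outer_step in range(20):
    local_params = []
    for i in range(K):
        # Each worker starts from the shared parameters and runs H inner
        # steps on its local shard. The paper uses AdamW here; plain SGD
        # keeps this toy short.
        th = theta.copy()
        for _ in range(H):
            grad = th - centers[i]
            th -= INNER_LR * grad
        local_params.append(th)

    # One communication round per H inner steps: average the workers'
    # parameter deltas into an outer "pseudo-gradient".
    delta = theta - np.mean(local_params, axis=0)

    # Outer update with Nesterov momentum, as in the paper.
    momentum = MU * momentum + delta
    theta -= OUTER_LR * (delta + MU * momentum)

print("distance to optimum:", np.linalg.norm(theta - centers.mean(axis=0)))
```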

AI Machine Learning & Data Science Research

Microsoft Orca 2’s Triumph: Comparable or Superior Performance to Models 5-10x Its Size in Mastering Reasoning Tasks

Microsoft has recently unveiled Orca 2 in a new paper titled “Orca 2: Teaching Small Language Models How to Reason,” which explores how enhanced training signals can augment the reasoning abilities of smaller language models. Notably, Orca 2 surpasses models of similar size and achieves performance comparable to or better than models 5-10 times larger.

AI Machine Learning & Data Science Research

Democratizing Data: How Apple and UW’s Data Filtering Networks Redefine Large-Scale Training Sets

In a new paper Data Filtering Networks, a research team from Apple and the University of Washington introduces the concept of data filtering networks (DFNs). These neural networks, specifically designed for data filtration, demonstrate the capacity to generate extensive, high-quality pre-training datasets efficiently.
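
Downstream of a trained DFN, the mechanics are simple: score every candidate example and keep only the top fraction. A minimal numpy sketch follows, in which the scores stand in for a filtering network's output (e.g., an image-text alignment score) and the 20% keep rate is an arbitrary illustrative choice, not the paper's setting.

```python
import numpy as np

def filter_pool(scores: np.ndarray, keep_fraction: float = 0.2) -> np.ndarray:
    """Return indices of the highest-scoring candidate examples.

    `scores` would come from a trained data filtering network; here
    they are taken as given.
    """
    k = max(1, int(len(scores) * keep_fraction))
    # argpartition finds the top-k in O(n) without a full sort.
    return np.argpartition(scores, -k)[-k:]

# Stand-in scores for a pool of 1M candidate pairs (illustrative only).
rng = np.random.default_rng(1)
pool_scores = rng.normal(size=1_000_000)
kept = filter_pool(pool_scores, keep_fraction=0.2)
print(f"kept {kept.size} of {pool_scores.size} candidates")
```

The paper's actual contribution lies upstream of this step: how the filtering network itself should be trained so that the retained subset yields strong pre-trained models.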

AI Computer Vision & Graphics Machine Learning & Data Science Research

Adobe & ANU’s LRM Reconstructs a 3D Model From a Single Image in 5 Seconds

In a new paper LRM: Large Reconstruction Model for Single Image to 3D, a research team from Adobe Research and Australian National University introduces an innovative Large Reconstruction Model (LRM). This groundbreaking model has the remarkable ability to predict a 3D model of an object from a single input image in a mere 5 seconds.

AI Machine Learning & Data Science Research

Google’s E3 TTS Provides an Effortless Approach to High-Quality Audio Synthesis Through Diffusion Models

In a new paper E3 TTS: Easy End-to-End Diffusion-based Text to Speech, a Google research team proposes Easy End-to-End Diffusion-based Text to Speech. This streamlined and efficient text-to-speech model hinges solely on diffusion to preserve temporal structure, allowing it to accept plain text as input and generate audio waveforms directly.

AI Machine Learning & Data Science Research

Apple Repurposes Large Language Models for Reinforcement Learning Challenges in Embodied AI

An Apple research team presents Large LAnguage model Reinforcement Learning Policy (LLaRP). LLaRP effectively repurposes LLMs for Reinforcement Learning (RL) challenges within the realm of Embodied Artificial Intelligence (AI), achieving a remarkable 1.7 times higher success rate compared to other established baselines and zero-shot LLM applications.

AI Machine Learning & Data Science Research

Microsoft Azure’s Idea2Img: Enabling Automatic Image Design and Generation with Enhanced Image Quality

A Microsoft Azure AI research team introduces Idea2Img in their paper “Idea2Img: Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation,” which leverages the capabilities of GPT-4V(ision) to revolutionize the process of automatic image design and generation with enhanced image quality.
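
At its core, Idea2Img is a control loop between a vision-language model and a text-to-image model: draft a prompt, render candidates, select, critique, repeat. Below is a hedged sketch of that loop; the `vlm_*` and `t2i_generate` helpers are hypothetical stand-ins for GPT-4V(ision) and text-to-image calls, stubbed out so the control flow runs, and are not real APIs from the paper or any SDK.

```python
def vlm_revise_prompt(idea, prev_prompt, memory):
    return f"{idea} (revision {len(memory)})"       # hypothetical stub

def t2i_generate(prompt):
    return f"<image for: {prompt}>"                 # hypothetical stub

def vlm_select_best(idea, candidates):
    return candidates[0]                            # hypothetical stub

def vlm_give_feedback(idea, image):
    return "make subject more prominent"            # hypothetical stub

def idea2img(idea: str, rounds: int = 3):
    prompt, memory, best = idea, [], None
    for _ in range(rounds):
        # 1. Revise the T2I prompt from the idea plus past feedback.
        prompt = vlm_revise_prompt(idea, prompt, memory)
        # 2. Draft several candidate images for the revised prompt.
        candidates = [t2i_generate(prompt) for _ in range(3)]
        # 3. Let the VLM pick the draft closest to the idea ...
        best = vlm_select_best(idea, candidates)
        # 4. ... and describe what still deviates, feeding the next round.
        memory.append((prompt, vlm_give_feedback(idea, best)))
    return best

print(idea2img("a watercolor fox reading under a lamp"))
```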

AI Machine Learning & Data Science Research

Microsoft’s DeepSpeed-VisualChat: Breaking Boundaries in Multi-Modal Language Models

In a new paper DeepSpeed-VisualChat: Multi-Round Multi-Image Interleave Chat via Multi-Modal Causal Attention, Microsoft’s DeepSpeed team presents the DeepSpeed-VisualChat framework, which is designed to extend LLMs with multi-modal capabilities and demonstrates superior scalability, even up to a 70-billion-parameter model size.

AI Machine Learning & Data Science Research

Yale U & Google’s HyperAttention: Long-Context Attention with the Best Possible Near-Linear Time Guarantee

In a new paper HyperAttention: Long-context Attention in Near-Linear Time, a research team from Yale University and Google Research presents HyperAttention, an approximate attention mechanism that not only offers practical efficiency but also delivers the best possible near-linear time guarantee for processing long contexts.
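
One ingredient of HyperAttention is sorting queries and keys by an LSH bucket code so that large attention entries cluster near the diagonal, where they can be computed exactly in blocks. The toy numpy sketch below shows only that bucketing idea; the actual algorithm additionally samples residual columns, and it is the full method, not this toy, that carries the formal near-linear-time guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_bucketed_attention(Q, K, V, n_bits=4, block=32):
    """Block-diagonal attention over tokens sorted by an LSH bucket code."""
    n, d = Q.shape
    planes = rng.normal(size=(d, n_bits))               # random hyperplanes
    code = lambda X: (X @ planes > 0) @ (1 << np.arange(n_bits))
    qo, ko = np.argsort(code(Q)), np.argsort(code(K))   # sort by bucket
    out = np.zeros_like(V)
    for s in range(0, n, block):                        # exact attention per block
        q = Q[qo[s:s + block]]
        k, v = K[ko[s:s + block]], V[ko[s:s + block]]
        a = np.exp(q @ k.T / np.sqrt(d))
        out[qo[s:s + block]] = (a / a.sum(1, keepdims=True)) @ v
    return out

n, d = 256, 16
Q, K, V = rng.normal(size=(3, n, d))
print(toy_bucketed_attention(Q, K, V).shape)            # (256, 16)
```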

AI Machine Learning & Data Science Research

NNAISENSE’s New Class of Generative Model: Bayesian Flow Networks Break Barriers in Handling Discrete Data

A NNAISENSE research team introduces a novel class of generative models known as Bayesian Flow Networks (BFNs). These BFNs combine the power of Bayesian inference with neural networks in an iterative modeling process, enabling successful application to continuous, discretized, and discrete data while maintaining competitive performance.

AI Machine Learning & Data Science Research

Stanford U’s MAPTree: Redefining Decision Trees – Precision, Speed, and Efficiency Unleashed

In a new paper MAPTree: Beating “Optimal” Decision Trees with Bayesian Decision Trees, a Stanford University research team introduces MAPTree, an algorithm that recovers the maximum a posteriori tree of the Bayesian Classification and Regression Trees (BCART) posterior, achieving strong performance with significantly leaner and faster trees.

AI Machine Learning & Data Science Natural Language Tech Research

The Reversal Curse: Uncovering the Intriguing Limits of Language Models

In a new paper titled “The Reversal Curse: LLMs trained on ‘A is B’ fail to learn ‘B is A’”, a collaborative research team from Vanderbilt University, the UK Frontier AI Taskforce, Apollo Research, New York University, the University of Sussex, and the University of Oxford unveils a remarkable shortcoming in auto-regressive large language models (LLMs).
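
A minimal probe in the spirit of the paper's evaluation is to score a fact in both directions with a causal LM and compare the likelihoods. The sketch below uses the Hugging Face transformers library with GPT-2 purely because it is small and public; it is not the paper's fine-tuning setup, and the example fact is an illustrative choice.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def completion_logprob(prompt: str, completion: str) -> float:
    """Total log-probability the model assigns to `completion` after `prompt`."""
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    ids = tok(prompt + completion, return_tensors="pt").input_ids
    logprobs = model(ids).logits.log_softmax(-1)
    # Logits at position t-1 score the token at position t.
    picked = logprobs[0, prompt_len - 1 : -1].gather(1, ids[0, prompt_len:, None])
    return picked.sum().item()

# "A is B" direction vs. "B is A" direction for the same fact.
fwd = completion_logprob("Valentina Tereshkova was", " the first woman in space")
rev = completion_logprob("The first woman in space was", " Valentina Tereshkova")
print(f"A-to-B: {fwd:.1f}   B-to-A: {rev:.1f}")
```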

AI Machine Learning & Data Science Natural Language Tech Research

Half a Day of Training for a Few Hundred Dollars Yields Results Comparable to Mainstream Large Models: an Open-Source, Commercially Usable Domain-Specific LLM Solution

At the forefront of cost reduction and efficiency enhancement for large models, the Colossal-AI team maximizes the core capabilities of LLaMA-2. Through innovative training techniques, Colossal-AI has achieved remarkable results using only approximately 8.5 billion tokens of data, 15 hours of training, and a total cost in the range of a few hundred dollars.

AI Machine Learning & Data Science Natural Language Tech Research

Unveiling the Enigma: Meta AI & UPC Decode the Inner Workings of Large-Scale Language Models

In a new paper Neurons in Large Language Models: Dead, N-gram, Positional, a research team from Meta AI and Universitat Politècnica de Catalunya conducts a comprehensive analysis of a family of Open Pre-trained Transformer (OPT) language models of up to 66B parameters to provide insights into how feed-forward network (FFN) layers act.
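
One of the paper's findings concerns "dead" neurons: FFN units whose ReLU output is zero on essentially every input. The bookkeeping behind such a measurement is straightforward, as the numpy sketch below shows. The toy FFN uses random weights and data (the paper instead collects statistics from OPT models over a large corpus), with a negative bias shift planted so that some units never fire.

```python
import numpy as np

# Toy stand-in for one FFN sublayer: acts = relu(x @ W1 + b1).
rng = np.random.default_rng(0)
d_model, d_ff, n_tokens = 64, 256, 10_000
W1 = rng.normal(scale=0.02, size=(d_model, d_ff))
b1 = rng.normal(scale=0.5, size=d_ff) - 0.5   # shift plants some dead units
X = rng.normal(size=(n_tokens, d_model))      # stand-in token representations

acts = np.maximum(X @ W1 + b1, 0.0)           # ReLU FFN activations
fire_rate = (acts > 0).mean(axis=0)           # how often each neuron activates
dead = fire_rate == 0.0                       # "dead": never fires on this data
print(f"dead neurons: {dead.sum()} / {d_ff}")
```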

AI Machine Learning & Data Science Research

Equall & Apple Revolutionize Transformers: One Wide Feedforward for Unprecedented Efficiency and Accuracy

A collaborative research effort from Equall and Apple delves into the role of the FFN and uncovers a surprising revelation: despite consuming a significant portion of the model’s parameters, the FFN exhibits high redundancy. As a result, the researchers propose sharing a single FFN across both the encoder and decoder, thereby reducing the parameter count while causing only a modest drop in accuracy.
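
The sharing scheme itself is simple to express: instantiate one FFN and let every encoder and decoder block reuse the same module object, so its weights are stored and learned once. Below is a minimal PyTorch sketch of that idea; the widths, depth, and simplified blocks (no cross-attention or masking) are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

d_model, d_ff, n_layers = 512, 4 * 512, 6

# ONE wide FFN, shared by reference across all blocks below.
shared_ffn = nn.Sequential(
    nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
)

class Block(nn.Module):
    def __init__(self, ffn: nn.Module):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.ffn = ffn                      # same module object in every block
        self.n1, self.n2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.n1(x)
        x = x + self.attn(h, h, h)[0]
        return x + self.ffn(self.n2(x))

encoder = nn.ModuleList(Block(shared_ffn) for _ in range(n_layers))
decoder = nn.ModuleList(Block(shared_ffn) for _ in range(n_layers))  # cross-attn omitted

x = torch.randn(2, 16, d_model)
for blk in encoder:
    x = blk(x)

# parameters() deduplicates the shared FFN weights automatically, so the
# count reflects the savings from storing the FFN once.
n_params = sum(p.numel() for p in nn.ModuleList([*encoder, *decoder]).parameters())
print(f"total parameters with shared FFN: {n_params:,}")
```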