Latest Posts

AI Machine Learning & Data Science Research

Introducing SpikeGPT: UCSC & Kuaishou’s LLM With Spiking Neural Networks Slashes Language Generation Costs

In the new paper SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks, a research team from the University of California, Santa Cruz and Kuaishou Technology presents SpikeGPT, the first generative spiking neural network language model. The team’s largest 260M-parameter version achieves DNN-level performance while maintaining the energy efficiency of spike-based computation.
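The energy savings come from replacing dense activations with binary spikes. As an illustration (not SpikeGPT’s exact formulation), a leaky integrate-and-fire (LIF) neuron, the standard building block of spiking networks, can be sketched as:

```python
import numpy as np

def lif_step(v, x, beta=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire neuron.
    v: membrane potential, x: input current. Returns (spike, new potential)."""
    v = beta * v + x                        # leaky integration of the input
    spike = v >= threshold                  # emit a binary spike on threshold crossing
    v = np.where(spike, v - threshold, v)   # soft reset after spiking
    return spike.astype(float), v

# A constant sub-threshold input produces sparse, periodic spikes.
v, spikes = np.zeros(1), []
for _ in range(10):
    s, v = lif_step(v, np.full(1, 0.3))
    spikes.append(float(s[0]))
```

Because activations are 0/1 events, downstream matrix multiplications reduce to sparse accumulations, which is where spiking hardware saves energy.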

AI Machine Learning & Data Science Natural Language Tech Research

Tackling Hallucinations: Microsoft’s LLM-Augmenter Boosts ChatGPT’s Factual Answer Score

In the new paper Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback, a Microsoft Research and Columbia University team presents LLM-Augmenter, a system that augments black-box large language models with a set of plug-and-play modules to significantly improve the factuality of their responses.
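Conceptually, LLM-Augmenter wraps a black-box LLM in a retrieve-draft-verify-retry loop. The sketch below uses hypothetical stub components (`retrieve`, `verify`, and a toy `llm`); the real system consolidates evidence from external knowledge sources and drives the loop with a learned policy:

```python
def llm_augmenter(query, llm, retrieve, verify, max_tries=3):
    """Ground the LLM in retrieved evidence, check its draft, retry with feedback."""
    feedback = ""
    for _ in range(max_tries):
        evidence = retrieve(query)
        draft = llm(f"Evidence: {evidence}\nFeedback: {feedback}\nQuestion: {query}")
        ok, feedback = verify(draft, evidence)
        if ok:
            break
    return draft

# Toy stubs: the first draft hallucinates; feedback elicits a grounded answer.
retrieve = lambda q: "Paris is the capital of France."
calls = {"n": 0}
def llm(prompt):
    calls["n"] += 1
    return "Lyon" if calls["n"] == 1 else "Paris"
def verify(draft, evidence):
    ok = draft in evidence
    return ok, "" if ok else "The draft contradicts the evidence."

answer = llm_augmenter("What is the capital of France?", llm, retrieve, verify)
```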

AI Machine Learning & Data Science Research

Google’s ROSIE Data Augmentation Strategy Scales Robot Learning With Semantically Imagined Experience

In the new paper Scaling Robot Learning with Semantically Imagined Experience, a team from Robotics at Google and Google Research proposes Robot Learning with Semantically Imagined Experience (ROSIE), a general and semantically aware data augmentation strategy that leverages text-to-image models to obtain data for robot learning.

AI Machine Learning & Data Science Natural Language Tech Research

CMU & Inspired Cognition’s DocPrompting Improves Code Generation by Retrieving Relevant Documentation

In the new paper DocPrompting: Generating Code by Retrieving the Docs, a research team from Carnegie Mellon University and Inspired Cognition presents DocPrompting, a natural-language-to-code generation approach. Tasked with generating code for unseen functions or libraries from a natural-language intent, DocPrompting retrieves the corresponding code documentation to enable the model to learn to perform the task.
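The core idea is easy to sketch: score documentation snippets against the natural-language intent, then prepend the top hits to the generation prompt. Below, simple word overlap stands in for DocPrompting’s learned retriever; the snippets and helper names are illustrative:

```python
def retrieve_docs(query, docs, k=2):
    """Rank documentation snippets by word overlap with the intent."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def build_prompt(intent, docs):
    """Prepend retrieved docs so the generator can read how unseen APIs are used."""
    return "\n".join(retrieve_docs(intent, docs)) + f"\n# Intent: {intent}\n"

docs = [
    "os.makedirs(path, exist_ok=False): create a directory recursively",
    "shutil.rmtree(path): delete an entire directory tree",
    "json.dumps(obj): serialize obj to a JSON string",
]
prompt = build_prompt("create a directory and all parents", docs)
```

The prompt now leads with the `os.makedirs` documentation, giving the generator the signature it needs for an API it never saw during training.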

AI Machine Learning & Data Science Research

Meta Heats Up the AI Race With Their State-Of-The-Art Foundation Language Model LLaMA

Meta AI reveals the technical details of their LLaMA collection of foundation language models in the new paper LLaMA: Open and Efficient Foundation Language Models. The LLaMA models were trained on trillions of tokens and achieve performance competitive with state-of-the-art models such as GPT-3 and PaLM while being much smaller and using only publicly available training data.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Oxford U Presents RealFusion: 360° Reconstructions of Any Object from a Single Image

In the new paper RealFusion: 360° Reconstruction of Any Object from a Single Image, an Oxford University research team leverages a diffusion model to generate 360° reconstructions of objects from a single image. Their RealFusion approach achieves state-of-the-art performance on monocular 3D reconstruction benchmarks.

AI Machine Learning & Data Science Research

Google & UCLA Formulate Algorithm Discovery as Program Search, Yielding ‘Lion’ for SOTA DNN Optimization

In the new paper Symbolic Discovery of Optimization Algorithms, a research team from Google and UCLA presents a method for formulating algorithm discovery as program search and applies it to find EvoLved Sign Momentum (Lion), a simple and effective optimization algorithm that surpasses state-of-the-art methods while reducing computation costs.
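Lion itself fits in a few lines: the update is the sign of an interpolation between the gradient and a momentum buffer, with decoupled weight decay. A NumPy sketch of the published update rule on a toy quadratic:

```python
import numpy as np

def lion_step(theta, m, grad, lr=0.01, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion (EvoLved Sign Momentum) update."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)  # every coordinate moves by +/- lr
    theta = theta - lr * (update + wd * theta)        # decoupled weight decay
    m = beta2 * m + (1 - beta2) * grad                # momentum updated after the step
    return theta, m

# Minimize f(x) = x^2 (gradient 2x) starting from x = 3.
x, m = np.array([3.0]), np.zeros(1)
for _ in range(300):
    x, m = lion_step(x, m, 2 * x)
```

Because every coordinate moves by exactly the learning rate, the paper pairs Lion with a smaller learning rate (and larger weight decay) than AdamW.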

AI Machine Learning & Data Science Natural Language Tech Research

DeepMind’s Speculative Sampling Achieves 2–2.5x Decoding Speedups in Large Language Models

In the new paper Accelerating Large Language Model Decoding with Speculative Sampling, a DeepMind research team presents SpS (Speculative Sampling), an algorithm that achieves 2–2.5x decoding speedups on a 70 billion parameter Chinchilla language model. The novel approach maintains sample quality and does not require any modifications to model parameters or architecture.
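The acceptance rule is what keeps the output distribution exact: each cheap draft token is accepted with probability min(1, p/q), and a rejection triggers a sample from the renormalized residual max(0, p - q). Below is a toy version with fixed distributions over a three-token vocabulary; the real method applies this per position, with a small draft LM proposing K tokens that the large model scores in a single pass:

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(p_target, q_draft, K=4):
    """Accept/reject K draft tokens so accepted tokens are exact samples of p_target."""
    out = []
    for _ in range(K):
        tok = int(rng.choice(len(q_draft), p=q_draft))        # cheap draft proposal
        if rng.random() < min(1.0, p_target[tok] / q_draft[tok]):
            out.append(tok)                                    # accept the draft token
        else:
            residual = np.maximum(p_target - q_draft, 0)       # correct the mismatch
            out.append(int(rng.choice(len(residual), p=residual / residual.sum())))
            break                                              # stop at first rejection
    return out

p = np.array([0.7, 0.2, 0.1])   # "large model" distribution
q = np.array([0.5, 0.3, 0.2])   # "draft model" distribution
firsts = [speculative_step(p, q)[0] for _ in range(5000)]
freq0 = firsts.count(0) / 5000  # empirically matches p[0] = 0.7
```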

AI Machine Learning & Data Science Research

Genius or Subpar AI Mathematician? New Study Questions ChatGPT’s Mathematical Capabilities

In the new paper Mathematical Capabilities of ChatGPT, an international research team tests ChatGPT’s mathematical capabilities and evaluates its suitability as an assistant to professional mathematicians. The team concludes that despite the glowing reviews in mainstream media, ChatGPT’s mathematical abilities “are significantly below those of an average mathematics graduate student.”

AI Machine Learning & Data Science Natural Language Tech Research

Stanford U’s DetectGPT Takes a Curvature-Based Approach to LLM-Generated Text Detection

In the new paper DetectGPT: Zero-Shot Machine-Generated Text Detection Using Probability Curvature, a Stanford University research team presents DetectGPT, a zero-shot machine-generated text detection algorithm that uses probability curvature to predict whether a candidate passage was generated by a large language model.
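The statistic behind DetectGPT is the perturbation discrepancy: the log-probability of a passage minus the average log-probability of slightly perturbed copies, which is positive when the passage sits at a local maximum of the model’s likelihood. The toy below replaces text with a 3-d vector and the LLM’s log-likelihood with a toy function; the real method perturbs passages with a mask-filling model such as T5:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbation_discrepancy(log_p, x, n_perturb=200, scale=0.1):
    """log p(x) minus the mean log p over perturbed copies of x."""
    perturbed = x + rng.normal(0.0, scale, size=(n_perturb, len(x)))
    return log_p(x) - np.mean([log_p(z) for z in perturbed])

log_p = lambda z: np.sum(np.cos(z))   # toy log-likelihood, peaked at the origin

d_generated = perturbation_discrepancy(log_p, np.zeros(3))     # at a likelihood peak
d_other = perturbation_discrepancy(log_p, np.full(3, np.pi))   # where curvature is positive
```

A positive discrepancy signals a likelihood peak, DetectGPT’s proxy for machine-generated text; thresholding it at zero separates the two cases in this toy.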

AI Machine Learning & Data Science Research

Microsoft & UCLA Introduce ClimaX: A Foundation Model for Climate and Weather Modelling

In the new paper ClimaX: A Foundation Model for Weather and Climate, a team from Microsoft Autonomous Systems and Robotics Research, Microsoft Research AI4Science and the University of California, Los Angeles presents ClimaX, a foundation model for weather and climate that can be efficiently adapted for general-purpose tasks related to the Earth’s atmosphere.

AI Machine Learning & Data Science Research

Stanford U’s Brain-Computer Interface Enables Stroke and ALS Patients to ‘Speak’ 62 Words per Minute

A Stanford University research team presents a brain-computer interface for translating speech-related neural activity into text (speech BCI) in the new paper A High-Performance Speech Neuroprosthesis. Theirs is the first speech BCI to record impulse activity from intracortical microelectrode arrays, and it could benefit people unable to produce clear utterances due to conditions such as stroke and ALS.

AI Machine Learning & Data Science Research

Forget About Catastrophic Forgetting: Google’s Continual HyperTransformer Enables Efficient Continual Few-Shot Learning

In the new paper Continual Few-Shot Learning Using HyperTransformers, a Google Research team proposes Continual HyperTransformer, which modifies the recently published HyperTransformer few-shot learning method to sequentially update a convolutional neural network’s weights based on the information in a new task without forgetting the knowledge it learned from previous tasks.

AI Machine Learning & Data Science Research

Meet Tracr: DeepMind & ETH Zurich’s Novel Interpretability Tool Compiles Human-Readable Code to Transformers’ Weights

In the new paper Tracr: Compiled Transformers as a Laboratory for Interpretability, a research team from ETH Zurich and DeepMind presents Tracr, a compiler that addresses the absence of ground truth explanations in deep neural network models by “compiling” human-readable code to the weights of a transformer model.

AI Machine Learning & Data Science Research

BERT-Style Pretraining on Convnets? Peking U, ByteDance & Oxford U’s Sparse Masked Modelling With Hierarchy Leads the Way

In the new paper Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling, a research team from Peking University, ByteDance, and the University of Oxford presents Sparse Masked Modelling with Hierarchy (SparK), the first BERT-style pretraining approach that can be used on convolutional models without any backbone modifications.

AI Machine Learning & Data Science Research

Microsoft’s Neural Codec Language Models Synthesize High-Quality Personalized Speech From a 3-Second Sample

In the new paper Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers, a Microsoft research team presents VALL-E, the first language model-based text-to-speech (TTS) system with strong in-context learning. VALL-E achieves state-of-the-art personalized speech synthesis quality via prompting in a zero-shot setting.

AI Machine Learning & Data Science Research

Baidu Create 2022 Forum Details Strategy for Next-Level AI-Enhanced Creativity via Feedback-Driven Innovation

Baidu, Inc. today hosted its annual flagship developer conference, Baidu Create 2022. At the event, Baidu offered an in-depth exploration of its research and analysis of future technology trends, covering a range of emerging technologies including artificial intelligence, autonomous driving, intelligent search, quantum computing and AI scientific computing.

AI Machine Learning & Data Science Research

Google’s Masked Generative Transformers Achieve SOTA Text-To-Image Performance With Improved Efficiency

In the new paper Muse: Text-To-Image Generation via Masked Generative Transformers, a Google Research team introduces Muse, a transformer-based text-to-image synthesis model that leverages masked image modelling to achieve state-of-the-art performance while being significantly faster than diffusion or autoregressive models.
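Muse’s speed comes from decoding image tokens in parallel: at each step the transformer predicts every masked token at once, the most confident predictions are kept, and the rest are re-masked for the next step. A minimal sketch of that schedule with a toy predictor (the predictor, vocabulary and sizes are illustrative):

```python
import numpy as np

MASK = -1

def iterative_decode(predict, length, steps=4):
    """Muse-style parallel decoding: keep the most confident tokens, re-mask the rest."""
    tokens = np.full(length, MASK)
    for s in range(steps):
        probs = predict(tokens)                           # (length, vocab) distributions
        best, conf = probs.argmax(-1), probs.max(-1)
        conf = np.where(tokens != MASK, np.inf, conf)     # already-fixed tokens stay fixed
        keep = int(np.ceil(length * (s + 1) / steps))     # unmask a growing fraction
        idx = np.argsort(-conf)[:keep]
        tokens[idx] = np.where(tokens[idx] == MASK, best[idx], tokens[idx])
    return tokens

# Toy "model": position i always prefers token i % 4, regardless of context.
def predict(tokens):
    probs = np.full((8, 4), 0.1)
    probs[np.arange(8), np.arange(8) % 4] = 0.7
    return probs

out = iterative_decode(predict, length=8)
```

A handful of such parallel steps replaces the hundreds of sequential steps an autoregressive decoder or a diffusion sampler would need.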

AI Machine Learning & Data Science Research

Stanford & Buffalo U Advance Language Modelling with State Space Models

In the new paper Hungry Hungry Hippos: Towards Language Modeling with State Space Models, Stanford University and State University of New York at Buffalo researchers explore the expressivity gap between state space models and transformer attention mechanisms, propose the H3 layer to narrow it, and introduce FlashConv to improve state space model training efficiency on modern hardware.
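A state space model can be run either as a step-by-step recurrence or, after materializing its kernel K_k = C A^k B, as one long convolution computed with FFTs, the operation FlashConv makes hardware-efficient. A minimal NumPy sketch checking that the two views agree:

```python
import numpy as np

def ssm_kernel(A, B, C, L):
    """Materialize the SSM's length-L convolution kernel K_k = C A^k B."""
    K, x = [], B
    for _ in range(L):
        K.append((C @ x).item())
        x = A @ x
    return np.array(K)

def fft_conv(u, K):
    """Causal convolution via FFT: zero-pad, multiply spectra, truncate."""
    n = len(u) + len(K)
    return np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(K, n), n)[: len(u)]

rng = np.random.default_rng(0)
A = 0.9 * np.eye(2)                               # stable state transition
B, C = rng.normal(size=(2, 1)), rng.normal(size=(1, 2))
u = rng.normal(size=64)                           # input sequence

y_fft = fft_conv(u, ssm_kernel(A, B, C, 64))

# The same output via the recurrence x_k = A x_{k-1} + B u_k, y_k = C x_k.
x, y_rec = np.zeros((2, 1)), []
for u_k in u:
    x = A @ x + B * u_k
    y_rec.append((C @ x).item())
```

The convolutional view turns sequence length L from a serial loop into an O(L log L) FFT, which is what makes SSM training fast on GPUs.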

AI Machine Learning & Data Science Research

DeepMind & Google’s ML-Based GraphCast Outperforms the World’s Best Medium-Range Weather Forecasting System

In the new paper GraphCast: Learning Skillful Medium-Range Global Weather Forecasting, a research team from DeepMind and Google presents GraphCast, a machine-learning (ML)-based weather simulator that scales well with data and can generate a 10-day forecast in under 60 seconds. GraphCast outperforms the world’s most accurate deterministic operational medium-range weather forecasting system and betters existing ML-based benchmarks.

AI Computer Vision & Graphics Machine Learning & Data Science Research

OpenAI’s Point·E: Generating 3D Point Clouds From Complex Prompts in Minutes on a Single GPU

In the new paper Point-E: A System for Generating 3D Point Clouds from Complex Prompts, an OpenAI research team presents Point·E, a system for text-conditional synthesis of 3D point clouds that leverages diffusion models to generate diverse and complex 3D shapes conditioned on text prompts, in minutes on a single GPU.