Category: AI

Global machine intelligence updates.

AI Machine Learning & Data Science Research

Huawei Rethinks Logic Synthesis, Proposing a Practical RL-based Approach That Achieves High Efficiency

In the new paper Rethinking Reinforcement Learning Based Logic Synthesis, a research team from Huawei Noah’s Ark Lab develops a novel reinforcement learning-based logic synthesis method to automatically recognize critical operators and produce common operator sequences that are generalizable to unseen circuits.

AI Machine Learning & Data Science Research

AI21 Labs’ Augmented Frozen Language Models Challenge Conventional Fine-Tuning Approaches Without Sacrificing Versatility

In the new paper Standing on the Shoulders of Giant Frozen Language Models, AI21 Labs researchers propose three novel methods for learning small neural modules that specialize a frozen language model to different tasks. Their compute-saving approach outperforms conventional frozen model methods and challenges fine-tuning performance without sacrificing model versatility.

AI Computer Vision & Graphics Machine Learning & Data Science Research

Microsoft Azure Introduces i-Code: A General Framework That Enables Flexible Multimodal Representation Learning

In the new paper i-Code: An Integrative and Composable Multimodal Learning Framework, a Microsoft Azure Cognitive Services Research team presents i-Code, a self-supervised pretraining framework that enables the flexible integration of vision, speech, and language modalities and learns their vector representations in a unified manner.

AI Computer Vision & Graphics Machine Learning & Data Science Research

LSTM Is Back! A Deep Implementation of the Decades-Old Architecture Challenges ViTs on Long Sequence Modelling

A research team from Rikkyo University and AnyTech Co., Ltd. examines the suitability of different inductive biases for computer vision and proposes Sequencer, an architectural alternative to ViTs that leverages long short-term memory (LSTM) rather than self-attention layers to achieve ViT-competitive performance on long sequence modelling.

AI Machine Learning & Data Science Research

Tsinghua U & BAAI’s CogView2 Achieves SOTA-Competitive Text-to-Image Generation With 10x Speedups

In the new paper CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers, researchers from Tsinghua University and the Beijing Academy of Artificial Intelligence pretrain a Cross-Modal general Language Model (CogLM) for text and image token prediction and finetune it for fast super-resolution. The resulting CogView2 hierarchical text-to-image system achieves significant speedups while generating images with better quality at comparable resolutions.

AI Machine Learning & Data Science Research

Northeastern U & Microsoft Expand StyleGAN’s Latent Space to Surpass the SOTA on Real Face Semantic Editing

In the new paper Expanding the Latent Space of StyleGAN for Real Face Editing, a research team from Northeastern University and Microsoft presents a novel two-branch method that expands the latent space of StyleGAN to enable identity-preserving and disentangled-attribute editing for real face images. The proposed approach achieves both qualitative and quantitative improvements over state-of-the-art methods.

AI Machine Learning & Data Science Natural Language Tech Research

Adobe’s UDoc Captures Cross-Modal Correlations in a Unified Pretraining Framework to Improve Document Understanding

In the new paper Unified Pretraining Framework for Document Understanding, an Adobe Research and Adobe Document Cloud team presents UDoc, a unified pretraining framework for document understanding that establishes cross-modal connections and highlights relevant information in both visual and textual modalities. UDoc achieves impressive performance on various downstream tasks.

AI Machine Learning & Data Science Research

UTokyo’s Novel Self-Blended Images Approach Achieves SOTA Results in Deepfake Detection

A research team from the University of Tokyo addresses the challenge of deepfake detection in their new paper Detecting Deepfakes with Self-Blended Images, proposing self-blended images (SBIs), a novel synthetic training data approach that outperforms state-of-the-art methods on unseen manipulations and scenes for deepfake detection tasks.
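The core idea is to manufacture forgery-like training data by blending an image with a slightly transformed copy of itself. Below is a minimal NumPy sketch of that idea; the uniform color jitter and Gaussian blending mask are simplified stand-ins for the paper's source/target transforms and face-region masks, not the authors' actual pipeline.

```python
import numpy as np

def self_blend(img, rng):
    """Toy self-blended image: mix an image with a jittered copy of itself
    inside a soft mask, creating subtle blending artifacts reminiscent of
    face-forgery boundaries. Jitter and mask shape are illustrative."""
    h, w, _ = img.shape
    # Slightly perturbed copy of the same image (stand-in for source transforms)
    jittered = np.clip(img * rng.uniform(0.9, 1.1) + rng.uniform(-0.05, 0.05), 0.0, 1.0)
    # Soft elliptical mask (stand-in for a face-region blending mask)
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2.0, w / 2.0
    mask = np.exp(-(((yy - cy) / (h / 4.0)) ** 2 + ((xx - cx) / (w / 4.0)) ** 2))[..., None]
    return mask * jittered + (1.0 - mask) * img

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64, 3))   # placeholder for a real face crop
fake = self_blend(img, rng)           # synthetic "fake" training sample
```

A detector trained to separate `img` from `fake` never sees real deepfakes, which is what lets the approach generalize to unseen manipulation methods.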

AI Machine Learning & Data Science Research

DeepMind, Mila & Google Brain Enable Generalization Capabilities for Causal Graph Structure Induction

A research team from DeepMind, Mila – University of Montreal and Google Brain proposes a neural network architecture that learns the graph structure of observational and/or interventional data via supervised training on synthetic graphs, casting causal induction as a black-box problem that generalizes well to new synthetic and naturalistic graphs.

AI Computer Vision & Graphics Machine Learning & Data Science Research

UC Berkeley & Intel’s Photorealistic Denoising Method Boosts Video Quality on Moonless Nights

In the new paper Dancing Under the Stars: Video Denoising in Starlight, a research team from UC Berkeley and Intel Labs leverages a GAN-tuned, physics-based noise model to represent camera noise under low light conditions and trains a novel denoiser that, for the first time, achieves photorealistic video denoising in starlight.

AI Machine Learning & Data Science Popular Research

Toward Self-Improving Neural Networks: Schmidhuber Team’s Scalable Self-Referential Weight Matrix Learns to Modify Itself

In the new paper A Modern Self-Referential Weight Matrix That Learns to Modify Itself, a research team from The Swiss AI Lab, IDSIA, University of Lugano (USI) & SUPSI, and King Abdullah University of Science and Technology (KAUST) presents a scalable self-referential weight matrix (SRWM) that leverages outer products and the delta update rule to update and improve itself.
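The mechanics of the delta update rule mentioned above can be shown in a few lines. In this toy NumPy sketch, the weight matrix reads the input and emits its own output, key, query, and learning rate, then rewrites itself via an outer product; the slicing layout and scaling are simplifying assumptions, and the real SRWM lives inside a multi-head, transformer-style model.

```python
import numpy as np

def srwm_step(W, x, d_in, d_y):
    """One toy self-referential step: W generates its own update targets,
    then applies the delta rule W <- W + beta * (v - v_bar) x k^T."""
    out = W @ x
    y = out[:d_y]                                 # task output
    k = out[d_y:d_y + d_in]                       # self-generated key
    q = out[d_y + d_in:d_y + 2 * d_in]            # self-generated query
    beta = 1.0 / (1.0 + np.exp(-out[-1]))         # sigmoid-gated learning rate
    v_bar = W @ k                                 # value currently bound to key k
    v = W @ q                                     # target value, retrieved via q
    W_new = W + beta * np.outer(v - v_bar, k)     # delta update rule (outer product)
    return y, W_new

rng = np.random.default_rng(0)
d_in, d_y = 4, 3
W = rng.standard_normal((d_y + 2 * d_in + 1, d_in)) * 0.1
x = rng.standard_normal(d_in)
y, W = srwm_step(W, x, d_in, d_y)
```

Because every update is an outer product, the modified matrix can be computed without materializing per-step gradients, which is what makes the scheme scalable.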

AI Machine Learning & Data Science Research

Alibaba’s USI: A Unified Scheme for Training Any Backbone on ImageNet That Delivers Top Results Without Hyperparameter Tuning

In the new paper Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results, a research team from Alibaba Group’s DAMO Academy introduces USI (Unified Scheme for ImageNet), a unified scheme for training any backbone on ImageNet that does not require adjustments or hyperparameter tuning between different models, and consistently yields top model results in terms of accuracy and efficiency.

AI Machine Learning & Data Science Research

OpenAI’s unCLIP Text-to-Image System Leverages Contrastive and Diffusion Models to Achieve SOTA Performance

In the new paper Hierarchical Text-Conditional Image Generation with CLIP Latents, an OpenAI research team combines the advantages of contrastive and diffusion models for text-conditional image generation tasks. Their proposed unCLIP model improves image diversity with minimal loss in photorealism and caption similarity, and produces image quality comparable to the state-of-the-art text-to-image system GLIDE.

AI Machine Learning & Data Science Research

Google Builds Language Models with Socratic Dialogue to Improve Zero-Shot Multimodal Reasoning Capabilities

In the new paper Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language, Google researchers argue that the diversity of pre-existing foundation models is symbiotic, and that structured Socratic dialogue between such models can formulate new multimodal tasks as a guided exchange, without additional finetuning.

AI Machine Learning & Data Science Research

EPFL’s Multi-modal Multi-task Masked Autoencoder: A Simple, Flexible and Effective ViT Pretraining Strategy Applicable to Any RGB Dataset

The Swiss Federal Institute of Technology Lausanne (EPFL) presents Multi-modal Multi-task Masked Autoencoders (MultiMAE), a simple and effective pretraining strategy that enables masked autoencoding to include multiple modalities and tasks and is applicable to any RGB dataset.
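The pretraining input pipeline can be sketched as follows: patch tokens from several modalities are pooled, and only a small random subset is fed to the ViT encoder, with the rest left for the decoders to reconstruct. This NumPy sketch uses uniform sampling as a simplification of the paper's Dirichlet-based allocation across modalities; the modality names and shapes are illustrative assumptions.

```python
import numpy as np

def sample_visible_tokens(modalities, n_visible, rng):
    """Toy MultiMAE-style masking: pool patch tokens from several modalities
    (e.g. RGB, depth, segmentation) and keep a random subset; only these
    visible tokens would be passed to the encoder."""
    tokens, owner = [], []
    for name, t in modalities.items():
        tokens.append(t)
        owner += [name] * len(t)          # remember which modality each token came from
    all_tokens = np.concatenate(tokens, axis=0)
    idx = rng.choice(len(all_tokens), size=n_visible, replace=False)
    return all_tokens[idx], [owner[i] for i in idx]

rng = np.random.default_rng(0)
mods = {"rgb": rng.standard_normal((196, 32)),     # 14x14 patches, toy embed dim
        "depth": rng.standard_normal((196, 32))}
visible, owners = sample_visible_tokens(mods, n_visible=98, rng=rng)
```

Because masking happens before the encoder, adding a modality only grows the token pool, which is why the strategy transfers to any RGB dataset with optional extra modalities.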

AI Machine Learning & Data Science Natural Language Tech Research

Training Compute-Optimal Large Language Models: DeepMind’s 70B Parameter Chinchilla Outperforms 530B Parameter Megatron-Turing

In the new paper Training Compute-Optimal Large Language Models, a DeepMind research team posits that current large language models are significantly undertrained and, based on empirical outcomes of over 400 training runs, proposes three predictive approaches for optimally setting model size and training duration. Their resulting compute-optimal 70B parameter model, Chinchilla, outperforms much larger models such as the 530B parameter Megatron-Turing NLG.
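The practical upshot of the paper is that model size and token count should grow in equal proportion with compute. A hedged back-of-the-envelope sketch, using the common C ≈ 6·N·D FLOPs approximation and the oft-quoted rule of thumb of roughly 20 training tokens per parameter (an approximation, not an exact constant from the paper):

```python
def compute_optimal(C, tokens_per_param=20.0):
    """Split a FLOPs budget C into (params N, tokens D) using the
    approximation C = 6 * N * D with D = tokens_per_param * N.
    Solving gives N = sqrt(C / (6 * tokens_per_param))."""
    N = (C / (6.0 * tokens_per_param)) ** 0.5
    D = tokens_per_param * N
    return N, D

# A Chinchilla-scale budget of ~5.76e23 FLOPs yields roughly
# 70B parameters trained on roughly 1.4T tokens.
N, D = compute_optimal(5.76e23)
```

Under this heuristic, a 530B parameter model at the same budget would be trained on far fewer tokens than optimal, which is the paper's explanation for why the much smaller Chinchilla wins.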

AI Machine Learning & Data Science Research

CMU & Google Extend Pretrained Models to Thousands of Underrepresented Languages Without Using Monolingual Data

A research team from Carnegie Mellon University and Google systematically explores strategies for leveraging the relatively under-studied resource of bilingual lexicons to adapt pretrained multilingual models to low-resource languages. Their resulting Lexicon-based Adaptation approach produces consistent performance improvements without requiring additional monolingual text.

AI Machine Learning & Data Science Natural Language Tech Research

Google, NYU & Maryland U’s Token-Dropping Approach Reduces BERT Pretraining Time by 25%

In the new paper Token Dropping for Efficient BERT Pretraining, a research team from Google, New York University, and the University of Maryland proposes a simple but effective “token dropping” technique that significantly reduces the pretraining cost of transformer models such as BERT without hurting performance on downstream fine-tuning tasks.
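The control flow of token dropping can be illustrated with a toy forward pass: all tokens go through the early layers, only the "important" ones continue through the later layers, and the dropped ones bypass those layers and are merged back at the end. In this NumPy sketch the importance scores are given as an input, standing in for the cumulative-loss statistics the paper uses, and the layers are placeholder functions; none of this is the authors' exact implementation.

```python
import numpy as np

def forward_with_token_dropping(x, layers, importance, keep_ratio=0.5, split=2):
    """Run layers[:split] on every token, layers[split:] only on the
    top-keep_ratio most important tokens, then scatter results back."""
    seq_len = x.shape[0]
    h = x
    for layer in layers[:split]:                  # early layers: full sequence
        h = layer(h)
    n_keep = max(1, int(keep_ratio * seq_len))
    keep = np.argsort(-importance)[:n_keep]       # indices of "important" tokens
    kept = h[keep]
    for layer in layers[split:]:                  # later layers: kept tokens only
        kept = layer(kept)
    out = h.copy()                                # dropped tokens skip later layers
    out[keep] = kept
    return out

layers = [lambda t: np.tanh(t) for _ in range(4)]   # placeholder "layers"
x = np.random.default_rng(1).standard_normal((8, 16))
imp = np.arange(8.0)                                # toy importance scores
y = forward_with_token_dropping(x, layers, imp)
```

Since the later (and typically wider) layers process only a fraction of the sequence, pretraining FLOPs drop roughly in proportion to the drop ratio, which is where the reported 25% savings comes from.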