Waymo and Google researchers’ new paper PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions presents PolyLoss, a novel and simple framework that redesigns loss functions as a linear combination of polynomial functions that can be tailored to different target tasks and datasets.
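The simplest instantiation of this framework, Poly-1, adds a single ε-weighted polynomial term to standard cross-entropy. Below is a minimal PyTorch sketch of that idea; the function name and default coefficient are illustrative assumptions, not the authors’ reference implementation.

```python
import torch
import torch.nn.functional as F

def poly1_cross_entropy(logits, targets, epsilon=1.0):
    """Poly-1 loss: cross-entropy plus an epsilon-weighted correction of the
    leading polynomial term (1 - Pt), where Pt is the probability assigned
    to the true class. Names and the default epsilon are illustrative."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = F.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (ce + epsilon * (1.0 - pt)).mean()

# toy usage
logits = torch.randn(8, 10)             # batch of 8 examples, 10 classes
targets = torch.randint(0, 10, (8,))    # ground-truth class indices
loss = poly1_cross_entropy(logits, targets)
```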
In the new paper Expanding the Latent Space of StyleGAN for Real Face Editing, a research team from Northeastern University and Microsoft presents a novel two-branch method that expands the latent space of StyleGAN to enable identity-preserving and disentangled-attribute editing for real face images. The proposed approach achieves both qualitative and quantitative improvements over state-of-the-art methods.
A research team from BIGO Technology and iQIYI Inc. presents ClothFormer, a novel video virtual try-on framework that preserves the features and details of both the clothing and the wearer to generate realistic and temporally smooth try-on videos that surpass the outputs of current state-of-the-art virtual try-on systems by a large margin.
In the new paper Unified Pretraining Framework for Document Understanding, an Adobe Research and Adobe Document Cloud team presents a unified pretraining framework for document understanding that enables cross-modal connections and the highlighting of relevant information in both visual and textual modalities. UDoc achieves impressive performance on various downstream tasks.
A research team from the University of Tokyo addresses the challenge of deepfake detection in their new paper Detecting Deepfakes with Self-Blended Images, proposing self-blended images (SBIs), a novel synthetic training data approach that outperforms state-of-the-art methods on unseen manipulations and scenes for deepfake detection tasks.
In the new paper PP-Matting: High-Accuracy Natural Image Matting, a Baidu research team proposes PP-Matting, a trimap-free architecture that combines a high-resolution detail branch and a semantic context branch to achieve state-of-the-art performance on natural image matting tasks.
A research team from DeepMind, Mila – University of Montreal and Google Brain proposes a neural network architecture that learns the graph structure underlying observational and/or interventional data via supervised training on synthetic graphs, casting causal induction as a black-box problem; the resulting model generalizes well to new synthetic and naturalistic graphs.
In the new paper Dancing Under the Stars: Video Denoising in Starlight, a research team from UC Berkeley and Intel Labs leverages a GAN-tuned, physics-based noise model to represent camera noise under low light conditions and trains a novel denoiser that, for the first time, achieves photorealistic video denoising in starlight.
In the new paper A Modern Self-Referential Weight Matrix That Learns to Modify Itself, a research team from The Swiss AI Lab, IDSIA, University of Lugano (USI) & SUPSI, and King Abdullah University of Science and Technology (KAUST) presents a scalable self-referential weight matrix (SRWM) that leverages outer products and the delta update rule to update and improve itself.
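The delta update rule at the heart of the SRWM replaces the value currently associated with a key by an interpolation toward a newly generated value. Here is a minimal NumPy sketch of that single step, with illustrative names; in the full SRWM the key, value and learning rate are all generated by the weight matrix itself.

```python
import numpy as np

def delta_update(W, k, v, beta):
    """One delta-rule step: nudge the value currently retrieved for key k
    toward the new value v, i.e. W <- W + beta * outer(v - W @ k, k).
    W: (d_out, d_in), k: (d_in,), v: (d_out,), beta: scalar rate."""
    v_old = W @ k                         # value currently associated with key k
    return W + beta * np.outer(v - v_old, k)

# toy usage
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
W = delta_update(W, rng.normal(size=3), rng.normal(size=4), beta=0.5)
```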
In the new paper DeepDPM: Deep Clustering With an Unknown Number of Clusters, a research team from the Ben-Gurion University of the Negev presents DeepDPM, an effective deep nonparametric approach that infers the number of clusters from the data instead of requiring it to be predefined.
In the new paper Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results, a research team from Alibaba Group’s DAMO Academy introduces USI (Unified Scheme for ImageNet), a unified scheme for training any backbone on ImageNet that does not require adjustments or hyperparameter tuning between different models, and consistently yields top model results in terms of accuracy and efficiency.
In the new paper Hierarchical Text-Conditional Image Generation with CLIP Latents, an OpenAI research team combines the advantages of contrastive and diffusion models for text-conditional image generation tasks. Their proposed unCLIP model improves image diversity with minimal loss in photorealism and caption similarity, and produces image quality comparable to the state-of-the-art text-to-image system GLIDE.
In the new paper Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language, Google researchers argue that the diversity of foundation models is symbiotic, and that it is possible to build a framework that uses structured Socratic dialogue between pre-existing foundation models to formulate new multimodal tasks as a guided exchange between the models, without additional finetuning.
A research team from the Swiss Federal Institute of Technology Lausanne (EPFL) presents Multi-modal Multi-task Masked Autoencoders (MultiMAE), a simple and effective pretraining strategy that extends masked autoencoding to multiple modalities and tasks and is applicable to any RGB dataset.
A Google Research team further explores the scaling approach for improving language modelling, leveraging the new Pathways distributed ML system to train a 540 billion parameter autoregressive transformer, Pathways Language Model (PaLM), that achieves state-of-the-art few-shot performance.
In the new paper Training Compute-Optimal Large Language Models, a DeepMind research team posits that current large language models are significantly undertrained and, based on empirical outcomes of over 400 training runs, proposes three predictive approaches for optimally setting model size and training duration.
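The headline recommendation is, roughly, that model size and training tokens should grow in equal proportion with compute, which for the models studied works out to on the order of 20 training tokens per parameter. The back-of-the-envelope sketch below illustrates that rule of thumb using the common C ≈ 6ND FLOP approximation; the constants are illustrative, not the paper’s fitted coefficients.

```python
def compute_optimal_allocation(compute_flops, tokens_per_param=20.0):
    """Given a training FLOP budget C and the approximation C ~= 6 * N * D,
    return a parameter count N and token count D that keep D / N at a fixed
    ratio (about 20 for the models in the paper). Illustrative constants only."""
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# A Chinchilla-scale budget of ~5.8e23 FLOPs maps to roughly 70B parameters
# trained on roughly 1.4T tokens.
n, d = compute_optimal_allocation(5.8e23)
```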
Researchers from Cash App Labs introduce simple modifications to the Very Deep Variational Autoencoder (VDVAE) that speed up convergence by 2.6x, save up to 20x in memory, and improve training stability. Their modified VDVAE achieves state-of-the-art performance on seven commonly used image datasets.
A research team from Carnegie Mellon University and Google systematically explores strategies for leveraging the relatively under-studied resource of bilingual lexicons to adapt pretrained multilingual models to low-resource languages. Their resulting Lexicon-based Adaptation approach produces consistent performance improvements without requiring additional monolingual text.
In the new paper Token Dropping for Efficient BERT Pretraining, a research team from Google, New York University, and the University of Maryland proposes a simple but effective “token dropping” technique that significantly reduces the pretraining cost of transformer models such as BERT without hurting performance on downstream fine-tuning tasks.
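Conceptually, token dropping routes only the most “important” tokens (identified in the paper via their running masked-LM loss) through the middle transformer layers. The PyTorch sketch below illustrates only the selection step; the function and argument names are assumptions for illustration, not the authors’ code.

```python
import torch

def drop_tokens(hidden_states, importance, keep_ratio=0.5):
    """Keep only the highest-importance tokens for the middle layers.
    hidden_states: (batch, seq_len, dim); importance: (batch, seq_len).
    Returns the kept tokens and their positions so they can be merged back
    before the final layers. Names are illustrative."""
    n_keep = max(1, int(hidden_states.size(1) * keep_ratio))
    keep_idx = importance.topk(n_keep, dim=1).indices            # (batch, n_keep)
    kept = torch.gather(
        hidden_states, 1,
        keep_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1)),
    )
    return kept, keep_idx
```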
Researchers from IBM Quantum propose a quantum algorithm for sampling from distributions that can be both complicated and useful, applying the algorithm to perform Markov Chain Monte Carlo (MCMC) iterative sampling on the Boltzmann distribution of classical Ising models.
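For context, the baseline that the quantum proposal slots into is ordinary Metropolis-Hastings sampling of the Ising Boltzmann distribution. The sketch below shows only that classical baseline with single spin-flip proposals; it is illustrative and does not implement the quantum-enhanced proposal.

```python
import numpy as np

def ising_energy(spins, J, h):
    """E(s) = -1/2 * s^T J s - h^T s for spins s_i in {-1, +1}."""
    return -0.5 * spins @ J @ spins - h @ spins

def metropolis_step(spins, J, h, temperature, rng):
    """One Metropolis-Hastings step targeting p(s) ~ exp(-E(s)/T),
    using a single random spin flip as the (classical) proposal."""
    i = rng.integers(len(spins))
    proposal = spins.copy()
    proposal[i] *= -1
    delta_e = ising_energy(proposal, J, h) - ising_energy(spins, J, h)
    if delta_e <= 0 or rng.random() < np.exp(-delta_e / temperature):
        return proposal
    return spins

# toy 8-spin model with random couplings and fields
rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
h = rng.normal(size=n)
spins = rng.choice([-1.0, 1.0], size=n)
for _ in range(1000):
    spins = metropolis_step(spins, J, h, temperature=1.0, rng=rng)
```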
A Microsoft Research team proposes FocalNet (Focal Modulation Network), a simple and attention-free architecture designed to replace transformers’ self-attention module. FocalNets significantly outperform their self-attention counterparts in both effectiveness and efficiency for visual modelling in real-world applications.
A Tsinghua University research team proposes Stochastic Scheduled SAM (SS-SAM), a novel and efficient DNN training scheme that achieves comparable or better model performance at much lower computation cost than the baseline sharpness-aware minimization (SAM) training scheme.
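The core idea is to choose randomly, at each optimization step, between a costly two-forward-pass SAM update and a plain descent step according to a scheduled probability. A rough PyTorch sketch of one such step is shown below; the function signature, perturbation details and fixed p_sam value are illustrative assumptions rather than the paper’s exact schedule.

```python
import torch

def ss_sam_step(model, loss_fn, inputs, targets, optimizer, rho=0.05, p_sam=0.5):
    """With probability p_sam take a SAM step (ascend to a nearby worst-case
    point, then descend from there); otherwise take a plain descent step.
    In the paper p_sam follows a schedule; here it is a fixed illustrative value."""
    if torch.rand(1).item() < p_sam:
        loss_fn(model(inputs), targets).backward()            # gradient at current weights
        perturbations = []
        with torch.no_grad():
            grads = [p.grad for p in model.parameters() if p.grad is not None]
            grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
            for p in model.parameters():
                if p.grad is None:
                    continue
                e = rho * p.grad / (grad_norm + 1e-12)
                p.add_(e)                                      # climb toward the worst case
                perturbations.append((p, e))
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()            # gradient at the perturbed point
        with torch.no_grad():
            for p, e in perturbations:
                p.sub_(e)                                      # restore the original weights
    else:
        loss_fn(model(inputs), targets).backward()             # plain step
    optimizer.step()
    optimizer.zero_grad()
```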
A DeepMind research team argues that the mathematical description of symmetries in group theory is an important foundation that determines the structure of the universe, constrains the nature of natural tasks, and consequently shapes both biological and artificial intelligence. The study proposes symmetry transformations as a fundamental principle for defining what makes good representations.
A Google research team addresses conventional transformers’ resource-heavy training and fine-tuning requirements for learning new knowledge, proposing Memorizing Transformers as a step toward language models that can simply read and memorize new data at inference time for immediate knowledge acquisition.
A team from Google Research and the Swiss AI Lab IDSIA proposes the Block-Recurrent Transformer, a novel long-sequence processing approach that has the same computation time and parameter count costs as a conventional transformer layer but achieves significant perplexity improvements in language modelling tasks over very long sequences.
Researchers from Meta AI and the State University of New York at Buffalo propose sparsely-activated all-MLP architectures (sMLPs) that achieve training efficiency improvements of up to 2x compared to transformer-based mixture-of-experts (MoE) architectures, transformers, and gMLP.
In the new paper Deep AutoAugment, a research team from Michigan State University and Amazon Web Services proposes Deep AutoAugment (DeepAA), a fully automated multi-layer data augmentation search method that eliminates the need for hand-crafted default transformations.
A research team from the National University of Singapore, HPC-AI Technology Inc., Helixon and Shanghai Jiao Tong University proposes FastFold, a highly efficient protein structure prediction model for training and inference that reduces AlphaFold 2’s training time from 11 days to 67 hours.
An Idiap Research Institute team proposes a novel multi-layer perceptron (MLP) model, HyperMixer, as a Green AI alternative to transformers. HyperMixer achieves comparable performance with substantially lower costs in terms of processing time, training data and hyperparameter tuning.
A research team from DeepMind, Ca’ Foscari University of Venice, University of Oxford and Athens University of Economics and Business introduces Ithaca, a deep neural network (DNN) designed for textual restoration and geographical and chronological attribution of ancient Greek inscriptions.
In the new paper Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer, Microsoft and OpenAI researchers propose µTransfer, a method that leverages Maximal Update Parametrization (µP) to zero-shot transfer hyperparameters from small models and obtain near-optimal parameters on large models without directly tuning them.
In the new paper AutoDIME: Automatic Design of Interesting Multi-Agent Environments, an OpenAI research team explores automatic environment design for multi-agent environments using an RL-trained teacher that samples environments to maximize student learning. The work demonstrates that intrinsic teacher rewards are a promising approach for automating both single and multi-agent environment design.
In the new paper Learning Robust Real-Time Cultural Transmission Without Human Data, a DeepMind research team proposes a procedure for training artificially intelligent agents capable of flexible, high-recall, robust real-time cultural transmission from human co-players in a rich 3D physical simulation without using human data in the training pipeline.
A research team from the University of Washington, UC San Diego and Microsoft prototypes Tensor Query Processor (TQP), a query processor that runs atop tensor computation runtimes (TCRs) such as PyTorch, TVM, and ONNX Runtime, improving query execution time by up to 20x over CPU-only systems and up to 5x over specialized GPU solutions.
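The underlying idea is that relational operators can be compiled into ordinary tensor operations, so the same query runs wherever the tensor runtime does. A toy PyTorch illustration of that mapping (with made-up column names, not TQP’s actual compiler output):

```python
import torch

# Columns of a toy "orders" table stored as tensors.
price = torch.tensor([10.0, 25.0, 7.5, 42.0, 18.0])
quantity = torch.tensor([3.0, 1.0, 10.0, 2.0, 5.0])

# SELECT SUM(price * quantity) FROM orders WHERE price > 15
mask = price > 15.0                        # the predicate becomes a boolean mask
revenue = (price * quantity)[mask].sum()   # the aggregate becomes a reduction
```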