In the new paper An Improved One millisecond Mobile Backbone, an Apple research team presents MobileOne, a novel mobile backbone that cuts inference time to under one millisecond on an iPhone 12 and reaches 75.9 percent top-1 accuracy on ImageNet.
In the new paper Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks, a research team from the Allen Institute for AI and the University of Washington introduces UNIFIED-IO, a neural model that achieves strong performance across a wide variety of vision, language, and multi-modal tasks without task- or modality-specific branches or fine-tuning.
In the new paper Evolution Through Large Models, an OpenAI research team shows that large language models (LLMs) trained to generate code can suggest intelligent mutations, enabling dramatically improved mutation operators for genetic programming.
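The mechanism lends itself to a compact sketch: an LLM takes the place of the random mutation operator inside an otherwise ordinary evolutionary loop. Below is a minimal, hypothetical illustration; `llm_propose_mutation` stands in for whatever code model or API one wires up, and the loop itself is a generic evolutionary search rather than the paper's exact ELM procedure.

```python
import random

def llm_propose_mutation(program: str) -> str:
    """Hypothetical call to a code-generating LLM that returns a mutated
    variant of `program`; stands in for the paper's diff-based mutations."""
    raise NotImplementedError("wire up a code model or API here")

def evolve(population, fitness, generations=10, keep=5):
    """Generic evolutionary loop with LLM-suggested mutations."""
    for _ in range(generations):
        # Score current candidates and keep the fittest few as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        # Ask the LLM to mutate parents instead of applying random edits.
        children = [llm_propose_mutation(random.choice(parents))
                    for _ in range(len(population) - keep)]
        population = parents + children
    return max(population, key=fitness)
```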
In the new paper GoodBye WaveNet — A Language Model for Raw Audio with Context of 1/2 Million Samples, Stanford University researcher Prateek Verma presents a generative auto-regressive architecture that models audio waveforms over contexts greater than 500,000 samples and outperforms state-of-the-art WaveNet baselines.
In the new paper Large-Scale Retrieval for Reinforcement Learning, a DeepMind research team dramatically expands the information accessible to reinforcement learning (RL) agents, enabling them to attend to tens of millions of information pieces, incorporate new information without retraining, and learn decision making in an end-to-end manner.
In the new paper LegoNN: Building Modular Encoder-Decoder Models, Meta AI researchers propose LegoNN, a procedure for building encoder-decoder architectures with decoder modules that can be shared across different tasks without fine-tuning or significant performance reductions.
In the new paper VCT: A Video Compression Transformer, a Google Research team presents an elegantly simple yet powerful video compression transformer (VCT) that requires no architectural biases or priors and learns entirely from data, without hand-crafting. VCT is easy to implement and outperforms conventional video compression approaches.
In the new paper Toward a Realistic Model of Speech Processing in the Brain with Self-supervised Learning, researchers show that self-supervised architectures such as Wav2Vec 2.0 can learn brain-like representations from as little as 600 hours of unlabelled speech, and can also learn sound-generic as well as speech- and language-specific representations similar to those of the prefrontal and temporal cortices.
In the new paper Beyond the Imitation Game: Quantifying and Extrapolating the Capabilities of Language Models, 444 authors from 132 institutions introduce the Beyond the Imitation Game benchmark (BIG-bench), a large-scale, extremely difficult and diverse suite of 204 tasks for predicting the potentially transformative effects of large language models.
In the new paper Neural Diffusion Processes, a research team from the University of Cambridge, Secondmind, and Google Research presents Neural Diffusion Processes (NDPs), a novel framework that learns to sample from rich distributions over functions at a lower computational cost than the true Bayesian posterior of a conventional Gaussian process.
In the new paper Is a Modular Architecture Enough?, a research team from Mila and the Université de Montréal conducts a rigorous and thorough quantitative assessment of common modular architectures that reveals the benefits of modularity and sparsity for deep neural networks and the sub-optimality of existing end-to-end learned modular systems.
In the new paper Extreme Compression for Pre-trained Transformers Made Simple and Efficient, a Microsoft research team introduces XTC, a simple yet effective extreme compression pipeline for pretrained transformers that can achieve state-of-the-art results while reducing model size by 50x.
In the new paper Rare Gems: Finding Lottery Tickets at Initialization, a research team from Carnegie Mellon University, MBZUAI, Petuum, Inc. and the University of Wisconsin-Madison proposes GEM-MINER, an algorithm that finds sparse subnetworks at initialization that are trainable to accuracy comparable to or better than that achieved by iterative magnitude pruning (IMP) with warm-up.
The BAAI Conference 2022 kicked off at 9:00 am on May 31 in Beijing and ran through June 2. AI experts, industry leaders, young talents and international delegates joined the virtual gathering and live stream for three busy days of high-level keynotes, tech talks, parallel forums and networking.
In the new paper Factory: Fast Contact for Robotic Assembly, a research team from NVIDIA and the University of Washington introduces Factory, a set of physics simulation methods and robot learning tools for simulating contact-rich interactions in assembly with high accuracy, efficiency, and robustness.
In the new paper UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes, a Google Brain research team proposes UViM, a unified approach that leverages language modelling and discrete representation learning to enable the modelling of a wide range of computer vision tasks without task-specific modifications.
In the new paper Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, a Google Brain research team presents Imagen, a text-to-image diffusion model that combines deep language understanding and photorealistic image generation capabilities to achieve a new state-of-the-art FID score of 7.27 on the COCO dataset.
In the new paper Tracing Knowledge in Language Models Back to the Training Data, a team from MIT CSAIL and Google Research proposes a benchmark for tracing language models’ assertions to the associated training data, aiming to establish a principled ground truth and mitigate high compute demands for large neural language model training.
In the new paper Large Language Models are Zero-Shot Reasoners, a research team from the University of Tokyo and Google Brain demonstrates that large language models (LLMs) can become good zero-shot reasoners through the addition of a simple prompt — “Let’s think step by step” — that elicits a step-by-step thinking process before each question is answered. Their Zero-shot-CoT model achieves huge performance gains compared to the zero-shot baseline.
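The method reduces to two calls to the same model, as in this minimal sketch; `ask_model` is an assumed stand-in for any text-completion function, and the answer-extraction phrase follows the paper's "Therefore, the answer is" pattern:

```python
def zero_shot_cot(ask_model, question: str) -> str:
    """Two-stage Zero-shot-CoT prompting with any text-completion model."""
    # Stage 1: elicit a step-by-step reasoning chain with the trigger prompt.
    prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = ask_model(prompt)
    # Stage 2: feed the chain back and extract the final answer.
    extraction = f"{prompt} {reasoning}\nTherefore, the answer is"
    return ask_model(extraction)
```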
In the new paper Automated Crossword Solving, researchers from UC Berkeley and Matthew Ginsberg LLC present the Berkeley Crossword Solver (BCS), an end-to-end state-of-the-art system for automatically solving challenging crossword puzzles that captured first place in the American Crossword Puzzle Tournament.
In the new paper Masked Autoencoders As Spatiotemporal Learners, a Meta AI research team extends masked autoencoders (MAE) to spatiotemporal representation learning for video. The novel approach introduces negligible inductive biases on space-time while achieving strong empirical results compared to vision transformers (ViTs) and outperforms supervised pretraining by large margins.
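At its core the approach randomly masks spacetime patch tokens at a very high ratio (the paper reports that 90 percent works well for video, higher than is typical for images). A minimal PyTorch sketch of that masking step, with assumed tensor shapes, not the authors' implementation:

```python
import torch

def random_spacetime_masking(patches: torch.Tensor, mask_ratio: float = 0.9):
    """Randomly mask spacetime patch tokens, MAE-style.
    `patches` is assumed to have shape (batch, num_tokens, dim)."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    # Independent random permutation per sample; keep the first n_keep tokens.
    noise = torch.rand(b, n)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]
    visible = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return visible, keep_idx  # the encoder sees only the visible tokens
```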
In the new paper Meta-Learning Sparse Compression Networks, a DeepMind research team proposes steps for scaling implicit neural representations (INRs). The resulting meta-learning sparse compression networks can represent diverse data modalities such as images, manifolds, signed distance functions, 3D shapes, and scenes, achieving state-of-the-art results on some of them.
In the new paper Rethinking Reinforcement Learning Based Logic Synthesis, a research team from Huawei Noah’s Ark Lab develops a novel reinforcement learning-based logic synthesis method to automatically recognize critical operators and produce common operator sequences that are generalizable to unseen circuits.
In the new paper Standing on the Shoulders of Giant Frozen Language Models, AI21 Labs researchers propose three novel methods for learning small neural modules that specialize a frozen language model to different tasks. Their compute-saving approach outperforms conventional frozen model methods and challenges fine-tuning performance without sacrificing model versatility.
In the new paper Quantum Self-Attention Neural Networks for Text Classification, a team from Baidu Research and the University of Technology Sydney proposes the quantum self-attention neural network (QSANN), a simple yet powerful architecture that is effective and scalable to large real-world datasets.
In the new paper Unifying Language Learning Paradigms, a Google Research/Brain team proposes a framework for pretraining universal language models that are effective across many different tasks. Their 20B parameter model surpasses 175B GPT-3 on the zero-shot SuperGLUE benchmark and triples the performance of T5-XXL on one-shot summarization tasks.
In the new paper i-Code: An Integrative and Composable Multimodal Learning Framework, a Microsoft Azure Cognitive Services Research team presents i-Code, a self-supervised pretraining framework that enables the flexible integration of vision, speech, and language modalities and learns their vector representations in a unified manner.
A research team from Rikkyo University and AnyTech Co., Ltd. examines the suitability of different inductive biases for computer vision and proposes Sequencer, an architectural alternative to ViTs that leverages long short-term memory (LSTM) rather than self-attention layers to achieve ViT-competitive performance on long sequence modelling.
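A stripped-down sketch of the idea replaces a transformer block's self-attention with a bidirectional LSTM token mixer. Note the real Sequencer uses a 2D variant that runs vertical and horizontal BiLSTMs over the patch grid; this simplified one-dimensional version is for illustration only:

```python
import torch
import torch.nn as nn

class BiLSTMMixer(nn.Module):
    """Illustrative LSTM token mixer standing in for self-attention."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, dim)  # fuse both LSTM directions

    def forward(self, x):  # x: (batch, tokens, dim)
        out, _ = self.lstm(self.norm(x))
        return x + self.proj(out)  # residual connection, as in transformer blocks
```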
In the new paper A Probabilistic Interpretation of Transformers, ML Collective researcher Alexander Shim provides a probabilistic explanation of transformers’ exponential dot product attention and contrastive learning based on distributions of the exponential family.
In the new technical report OPT: Open Pre-trained Transformer Language Models, Meta AI open-sources OPT, a suite of decoder-only pretrained transformers ranging from 125M to 175B parameters. The release will enable more researchers to work with large-scale language models to drive the field forward.
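For readers who want to try the models, the checkpoints can be loaded through the Hugging Face transformers library (assuming the facebook/opt-* checkpoints hosted on the Hub; the 125M model shown here is the smallest of the suite):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the smallest OPT checkpoint; larger ones follow the same pattern.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Open-sourcing large language models", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```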
In the new paper CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers, Tsinghua University and the Beijing Academy of Artificial Intelligence researchers pretrain a Cross-Modal general Language Model (CogLM) for text and image token prediction and fine-tune it for fast super-resolution. The resulting CogView2 hierarchical text-to-image system achieves significant speedups while generating images with better quality at comparable resolutions.
In the new paper Flamingo: a Visual Language Model for Few-Shot Learning, a DeepMind research team presents Flamingo, a novel family of visual language models (VLMs) that can handle multimodal tasks such as captioning, visual dialogue, classification and visual question answering when given only a few input/output samples.
Waymo and Google researchers’ new paper PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions presents PolyLoss, a novel and simple framework that redesigns loss functions as a linear combination of polynomial functions that can be tailored to different target tasks and datasets.
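In its simplest "Poly-1" form, PolyLoss adds a single epsilon-weighted polynomial term (1 − p_t) to standard cross-entropy, where p_t is the predicted probability of the target class. A minimal PyTorch sketch, with epsilon as the per-task coefficient the paper tunes:

```python
import torch
import torch.nn.functional as F

def poly1_cross_entropy(logits, targets, epsilon: float = 1.0):
    """Poly-1 loss: cross-entropy plus an epsilon-weighted (1 - p_t) term."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    # p_t: predicted probability of the true class for each sample.
    pt = torch.gather(F.softmax(logits, dim=-1), 1,
                      targets.unsqueeze(1)).squeeze(1)
    return (ce + epsilon * (1.0 - pt)).mean()
```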
In the new paper Expanding the Latent Space of StyleGAN for Real Face Editing, a research team from Northeastern University and Microsoft presents a novel two-branch method that expands the latent space of StyleGAN to enable identity-preserving and disentangled-attribute editing for real face images. The proposed approach achieves both qualitative and quantitative improvements over state-of-the-art methods.
A research team from BIGO Technology and iQIYI Inc. presents ClothFormer, a novel video virtual try-on framework that preserves the features and details of both clothing and wearer, generating realistic and temporally smooth try-on videos that surpass the outputs of current state-of-the-art virtual try-on systems by a large margin.