How to Speed Up 31×31 Convolutions by 10×
The MegEngine team proposes a large-kernel convolution optimization strategy that speeds up convolutional neural networks with 31×31 kernels by 10×.
AI Technology & Industry Review
Technical review of the newest machine intelligence research.
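For readers who want to try the operator at the center of this result, here is a minimal PyTorch sketch (not MegEngine's optimized implementation) of a depthwise convolution with a 31×31 kernel, the shape the work targets; the reported 10× speedup comes from MegEngine's specialized kernels for this operator, not from the reference code below.

```python
# A minimal sketch (PyTorch, not MegEngine) of a depthwise 31x31 convolution.
import torch
import torch.nn as nn

channels = 64
large_kernel = nn.Conv2d(
    channels, channels,
    kernel_size=31, padding=15,   # "same" padding for a 31x31 kernel
    groups=channels,              # depthwise: one filter per channel
    bias=False,
)

x = torch.randn(1, channels, 224, 224)
y = large_kernel(x)
print(y.shape)  # torch.Size([1, 64, 224, 224])
```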
In the new paper Automated Crossword Solving, researchers from UC Berkeley and Matthew Ginsberg LLC present the Berkeley Crossword Solver (BCS), an end-to-end state-of-the-art system for automatically solving challenging crossword puzzles that captured first place in the American Crossword Puzzle Tournament.
In the new paper Masked Autoencoders As Spatiotemporal Learners, a Meta AI research team extends masked autoencoders (MAE) to spatiotemporal representation learning for video. The novel approach introduces negligible inductive biases on space-time while achieving strong empirical results compared to vision transformers (ViTs) and outperforms supervised pretraining by large margins.
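As a rough illustration of the mechanism the paper extends, the sketch below (illustrative names and shapes, not Meta AI's code) applies random masking at the very high ratios the paper favors, e.g. 90%, to space-time patch tokens, so the encoder processes only the small visible subset.

```python
# A minimal sketch of random space-time masking for a video MAE.
import torch

def random_spacetime_mask(tokens, mask_ratio=0.9):
    """tokens: (batch, num_patches, dim) space-time patch embeddings."""
    b, n, d = tokens.shape
    num_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                      # uniform score per patch
    keep = noise.argsort(dim=1)[:, :num_keep]     # indices of visible patches
    visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, keep                          # encoder sees only `visible`

tokens = torch.randn(2, 8 * 14 * 14, 768)         # e.g., 8 frames of 14x14 patches
visible, keep = random_spacetime_mask(tokens)
print(visible.shape)                              # torch.Size([2, 156, 768])
```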
In the new paper Meta-Learning Sparse Compression Networks, a DeepMind research team proposes steps for scaling implicit neural representations (INRs). The resulting meta-learning sparse compression networks can represent diverse data modalities such as images, manifolds, signed distance functions, 3D shapes, and scenes, achieving state-of-the-art results on some of them.
In the new paper Rethinking Reinforcement Learning Based Logic Synthesis, a research team from Huawei Noah’s Ark Lab develops a novel reinforcement learning-based logic synthesis method to automatically recognize critical operators and produce common operator sequences that are generalizable to unseen circuits.
In the new paper Productivity Assessment of Neural Code Completion, a GitHub research team explores whether usage measurements of developer interactions with GitHub Copilot can predict productivity as reported by developers.
A DeepMind research team proposes Gato, a single general-purpose transformer sequence model that can engage in dialogue, caption images, stack blocks with a real robot arm, navigate in simulated 3D environments and even beat human players at Atari games.
In the new paper Standing on the Shoulders of Giant Frozen Language Models, AI21 Labs researchers propose three novel methods for learning small neural modules that specialize a frozen language model to different tasks. Their compute-saving approach outperforms conventional frozen model methods and challenges fine-tuning performance without sacrificing model versatility.
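The sketch below illustrates the general frozen-model recipe the paper builds on: every language-model weight stays frozen while a small external module trains. The soft-prompt module here is a simplified stand-in for illustration, not one of AI21's three specific methods.

```python
# A minimal sketch: freeze an LM, train only a small hypothetical soft-prompt.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, num_tokens, embed_dim):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # Prepend the learned prompt embeddings to each input sequence.
        b = input_embeds.size(0)
        return torch.cat([self.prompt.expand(b, -1, -1), input_embeds], dim=1)

frozen_lm = nn.TransformerEncoder(  # stand-in for a pretrained frozen LM
    nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True), num_layers=2)
for p in frozen_lm.parameters():
    p.requires_grad = False         # the LM itself is never updated

soft_prompt = SoftPrompt(num_tokens=20, embed_dim=512)
optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-3)

embeds = torch.randn(4, 10, 512)          # stand-in token embeddings
out = frozen_lm(soft_prompt(embeds))      # gradients flow only into the prompt
```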
In the new paper Quantum Self-Attention Neural Networks for Text Classification, a team from Baidu Research and the University of Technology Sydney proposes the quantum self-attention neural network (QSANN), a simple yet powerful architecture that is effective and scalable to large real-world datasets.
In the new paper Unifying Language Learning Paradigms, a Google Research/Brain team proposes a framework for pretraining universal language models that are effective across many different tasks. Their 20B parameter model surpasses 175B GPT-3 on the zero-shot SuperGLUE benchmark and triples the performance of T5-XXL on one-shot summarization tasks.
In the new paper Building Machine Translation Systems for the Next Thousand Languages, a Google Research team proposes a practical machine translation (MT) system that can translate over one thousand languages, including both high-resource and low-resource languages.
In the new paper i-Code: An Integrative and Composable Multimodal Learning Framework, a Microsoft Azure Cognitive Services Research team presents i-Code, a self-supervised pretraining framework that enables the flexible integration of vision, speech, and language modalities and learns their vector representations in a unified manner.
A research team from Rikkyo University and AnyTech Co., Ltd. examines the suitability of different inductive biases for computer vision and proposes Sequencer, an architectural alternative to ViTs that leverages long short-term memory (LSTM) rather than self-attention layers to achieve ViT-competitive performance on long sequence modelling.
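A minimal sketch of the core substitution, using standard PyTorch modules: mix a patch sequence with a bidirectional LSTM where a ViT block would use self-attention. Sequencer itself uses a more elaborate 2D (vertical and horizontal) LSTM variant; this only conveys the idea.

```python
# A minimal sketch of an LSTM token mixer standing in for self-attention.
import torch
import torch.nn as nn

class LSTMMixer(nn.Module):
    def __init__(self, dim, hidden):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, dim)    # fuse both LSTM directions

    def forward(self, x):                         # x: (batch, patches, dim)
        out, _ = self.lstm(self.norm(x))
        return x + self.proj(out)                 # residual, as in a ViT block

tokens = torch.randn(2, 196, 192)                 # 14x14 patches, dim 192
print(LSTMMixer(192, 48)(tokens).shape)           # torch.Size([2, 196, 192])
```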
In the new paper A Probabilistic Interpretation of Transformers, ML Collective researcher Alexander Shim provides a probabilistic explanation of transformers’ exponential dot product attention and contrastive learning based on distributions of the exponential family.
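For reference, the object being interpreted is ordinary exponential dot-product attention, sketched below: its weights exp(q·k/√d)/Z take the form of an exponential-family likelihood, which is the bridge the paper exploits.

```python
# A minimal sketch of exponential dot-product (softmax) attention.
import torch

def attention(q, k, v):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5   # dot products q.k / sqrt(d)
    weights = torch.softmax(scores, dim=-1)       # exp(.) normalized over keys
    return weights @ v

q = torch.randn(1, 4, 64)   # (batch, queries, dim)
k = torch.randn(1, 6, 64)   # (batch, keys, dim)
v = torch.randn(1, 6, 64)
print(attention(q, k, v).shape)   # torch.Size([1, 4, 64])
```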
In the new technical report OPT: Open Pre-trained Transformer Language Models, Meta AI open-sources OPT, a suite of decoder-only pretrained transformers ranging from 125M to 175B parameters. The release will enable more researchers to work with large-scale language models to drive the field forward.
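Because the checkpoints are public, the smallest OPT model can be tried in a few lines with Hugging Face Transformers (assuming the transformers package is installed):

```python
# Load and sample from the 125M-parameter OPT checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```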
In the new paper CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers, Tsinghua University and the Beijing Academy of Artificial Intelligence researchers pretrain a Cross-Modal general Language Model (CogLM) for text and image token prediction and finetune it for fast super-resolution. The resulting CogView2 hierarchical text-to-image system achieves significant speedups while generating images with better quality at comparable resolutions.
In the new paper Flamingo: a Visual Language Model for Few-Shot Learning, a DeepMind research team presents Flamingo, a novel family of visual language models (VLMs) that can handle multimodal tasks such as captioning, visual dialogue, classification and visual question answering when given only a few input/output samples.
Waymo and Google researchers’ new paper PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions presents PolyLoss, a novel and simple framework that redesigns loss functions as a linear combination of polynomial functions that can be tailored to different target tasks and datasets.
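The simplest instance from the paper, Poly-1, adds a single tunable polynomial term to cross-entropy; a minimal sketch:

```python
# Poly-1 loss: cross-entropy plus a weighted (1 - Pt) term, where Pt is the
# predicted probability of the target class.
import torch
import torch.nn.functional as F

def poly1_cross_entropy(logits, targets, epsilon=1.0):
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = F.softmax(logits, dim=-1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (ce + epsilon * (1.0 - pt)).mean()

logits = torch.randn(8, 10)             # (batch, classes)
targets = torch.randint(0, 10, (8,))
print(poly1_cross_entropy(logits, targets))
```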
In the new paper Expanding the Latent Space of StyleGAN for Real Face Editing, a research team from Northeastern University and Microsoft presents a novel two-branch method that expands the latent space of StyleGAN to enable identity-preserving and disentangled-attribute editing for real face images. The proposed approach achieves both qualitative and quantitative improvements over state-of-the-art methods.
A research team from BIGO Technology and iQIYI Inc. presents ClothFormer, a novel video virtual try-on framework that preserves the features and details of both garments and people to generate realistic, temporally smooth try-on videos, surpassing the outputs of current state-of-the-art virtual try-on systems by a large margin.
In the new paper Unified Pretraining Framework for Document Understanding, an Adobe Research and Adobe Document Cloud team presents UDoc, a unified pretraining framework for document understanding that enables cross-modal connections and highlights relevant information in both the visual and textual modalities. UDoc achieves impressive performance on various downstream tasks.
A research team from the University of Tokyo addresses the challenge of deepfake detection in their new paper Detecting Deepfakes with Self-Blended Images, proposing self-blended images (SBIs), a novel synthetic training data approach that outperforms state-of-the-art methods on unseen manipulations and scenes for deepfake detection tasks.
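A toy sketch of the self-blending idea (a deliberately crude stand-in for the paper's pipeline): blend an image with a slightly transformed copy of itself under a mask, so a detector trained on such pseudo-fakes learns generic blending artifacts rather than one forgery method.

```python
# A toy self-blended image: mask-blend an image with a jittered copy of itself.
import numpy as np

def self_blend(img, rng=None):
    rng = rng or np.random.default_rng(0)
    altered = np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)  # e.g., color jitter
    mask = np.zeros(img.shape[:2])
    h, w = mask.shape
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1.0      # crude "face" region
    mask = mask[..., None]
    return mask * altered + (1.0 - mask) * img                # pseudo source/target

img = np.random.default_rng(1).uniform(size=(64, 64, 3))
print(self_blend(img).shape)   # (64, 64, 3)
```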
A DeepMind research team presents a framework for the fine-grained analysis of various distribution shifts and provides insights on when and why we can expect models to generalize successfully.
In the new paper PP-Matting: High-Accuracy Natural Image Matting, a Baidu research team proposes PP-Matting, a trimap-free architecture that combines a high-resolution detail branch and a semantic context branch to achieve state-of-the-art performance on natural image matting tasks.
A research team from DeepMind, Mila – University of Montreal and Google Brain proposes a neural network architecture that learns the graph structure of observational and/or interventional data via supervised training on synthetic graphs, making causal induction a black-box problem that generalizes well to new synthetic and naturalistic graphs.
In the new paper Dancing Under the Stars: Video Denoising in Starlight, a research team from UC Berkeley and Intel Labs leverages a GAN-tuned, physics-based noise model to represent camera noise under low light conditions and trains a novel denoiser that, for the first time, achieves photorealistic video denoising in starlight.
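As context, the classic physics-based starting point for such noise models is Poisson shot noise plus Gaussian read noise, sketched below; the paper's GAN-tuned model adds further low-light noise sources on top of this core.

```python
# A minimal physics-based camera noise sketch: shot noise + read noise.
import numpy as np

def noisy_capture(clean, photons_per_unit=30.0, read_std=0.01, rng=None):
    """clean: array in [0, 1]; returns a simulated low-light noisy frame."""
    rng = rng or np.random.default_rng(0)
    shot = rng.poisson(clean * photons_per_unit) / photons_per_unit  # signal-dependent
    read = rng.normal(0.0, read_std, size=clean.shape)               # signal-independent
    return np.clip(shot + read, 0.0, 1.0)

frame = np.full((64, 64), 0.02)          # a very dark scene
print(noisy_capture(frame).std())        # noise dominates the signal in starlight
```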
In the new paper A Modern Self-Referential Weight Matrix That Learns to Modify Itself, a research team from The Swiss AI Lab IDSIA (USI & SUPSI) and King Abdullah University of Science and Technology (KAUST) presents a scalable self-referential weight matrix (SRWM) that leverages outer products and the delta update rule to update and improve itself.
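The delta update rule at the heart of the SRWM fits in a few lines; in the paper the matrix generates its own keys, values, and learning rates, but the core correction is this outer-product step:

```python
# The delta rule: correct a weight matrix by the outer product of its
# prediction error with the key.
import numpy as np

def delta_update(W, k, v, beta):
    """W: (d_out, d_in); k: key (d_in,); v: target value (d_out,)."""
    error = v - W @ k                  # what the matrix currently gets wrong
    return W + beta * np.outer(error, k)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1
k = rng.normal(size=8)
k /= np.linalg.norm(k)                 # unit-norm key
v = rng.normal(size=4)
W = delta_update(W, k, v, beta=1.0)    # one step stores the association exactly
print(np.allclose(W @ k, v))           # True
```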
In the new paper DeepDPM: Deep Clustering With an Unknown Number of Clusters, a research team from Ben-Gurion University of the Negev presents DeepDPM, an effective deep nonparametric approach that removes the need to predefine the number of clusters in clustering tasks and can instead infer it.
In the new paper Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results, a research team from Alibaba Group’s DAMO Academy introduces USI (Unified Scheme for ImageNet), which trains any backbone on ImageNet without model-specific adjustments or hyperparameter tuning and consistently yields top results in terms of accuracy and efficiency.
In the new paper Hierarchical Text-Conditional Image Generation with CLIP Latents, an OpenAI research team combines the advantages of contrastive and diffusion models for text-conditional image generation tasks. Their proposed unCLIP model improves image diversity with minimal loss in photorealism and caption similarity, and produces image quality comparable to the state-of-the-art text-to-image system GLIDE.
In the new paper Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language, Google researchers argue that the diversity of different foundation models is symbiotic and that it is possible to build a framework that uses structured Socratic dialogue between pre-existing foundation models to formulate new multimodal tasks as a guided exchange between the models without additional finetuning.
A team from the University of Maryland and Google Research proposes LilNetX, an end-to-end trainable technique for neural networks that jointly optimizes model parameters for accuracy, model size on disk, and computation on any given task.
The Swiss Federal Institute of Technology Lausanne (EPFL) presents Multi-modal Multi-task Masked Autoencoders (MultiMAE), a simple and effective pretraining strategy that enables masked autoencoding to include multiple modalities and tasks and is applicable to any RGB dataset.
A Meta AI research team explores the plain, non-hierarchical vision transformer (ViT) as a backbone network for object detection, proposing a ViT Detector that achieves performance competitive with traditional hierarchical backbones.
A Google Research team further explores the scaling approach for improving language modelling, leveraging the new Pathways distributed ML system to train a 540 billion parameter autoregressive transformer, Pathways Language Model (PaLM), that achieves state-of-the-art few-shot performance.
Baidu researchers introduce the PP-YOLOE object detector, which outperforms last year’s YOLOX in terms of the speed-accuracy trade-off. The PP-YOLOE-l variant surpasses PP-YOLOv2 by 1.9 percent AP and YOLOX-l by 1.3 percent AP on the COCO dataset.
In the new paper Training Compute-Optimal Large Language Models, a DeepMind research team posits that current large language models are significantly undertrained and, based on empirical outcomes of over 400 training runs, proposes three predictive approaches for optimally setting model size and training duration.
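A rough worked example of the headline finding, using the common C ≈ 6·N·D approximation for training FLOPs and the paper's rule of thumb of roughly 20 training tokens per parameter (both hedged approximations, not exact prescriptions):

```python
# Back-of-envelope compute-optimal sizing per the ~20 tokens/parameter rule.
def compute_optimal(params):
    tokens = 6 * params * 0 + 20 * params   # scale data with model size
    flops = 6 * params * tokens             # approximate training compute
    return tokens, flops

for n in (1e9, 70e9, 175e9):
    tokens, flops = compute_optimal(n)
    print(f"{n/1e9:.0f}B params -> {tokens/1e12:.2f}T tokens, {flops:.2e} FLOPs")
# By this rule a 175B model wants ~3.5T training tokens; GPT-3 saw ~0.3T,
# which is the sense in which current large models are "undertrained".
```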
Researchers from Cash App Labs introduce simple modifications to the Very Deep Variational Autoencoder (VDVAE) that speed up convergence by 2.6×, save up to 20× in memory, and improve training stability. Their modified VDVAE achieves state-of-the-art performance on seven commonly used image datasets.
A Stanford research team proposes Time Control (TC), a language model that implicitly plans via a latent stochastic process and generates texts consistent with this latent plan to improve performance on long text generation.
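TC's latent plan follows Brownian bridge dynamics, which interpolate noisily between fixed start and end latents; the sketch below samples each intermediate point from the bridge's marginal distribution (illustrative shapes, not the paper's code):

```python
# A Brownian bridge between a start latent z0 and an end latent zT.
import numpy as np

def brownian_bridge(z0, zT, num_steps, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    T = num_steps - 1
    zs = [z0]
    for t in range(1, T):
        mean = z0 + (zT - z0) * t / T                  # drift toward the endpoint
        std = sigma * np.sqrt(t * (T - t) / T)         # variance pinched at both ends
        zs.append(mean + std * rng.normal(size=z0.shape))
    zs.append(zT)
    return np.stack(zs)

plan = brownian_bridge(np.zeros(16), np.ones(16), num_steps=10)
print(plan.shape)   # (10, 16) -- a latent plan from start to end of the document
```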
A research team from Carnegie Mellon University and Google systematically explores strategies for leveraging the relatively under-studied resource of bilingual lexicons to adapt pretrained multilingual models to low-resource languages. Their resulting Lexicon-based Adaptation approach produces consistent performance improvements without requiring additional monolingual text.