Google & DeepMind Unify Normalization and Activation Layers, Discover ‘EvoNorms’
Researchers unify DNN normalization layers and activation functions into a single computation graph.
AI Technology & Industry Review
Researchers have proposed a new and inexpensive method for automatically generating yuru-chara characters.
Researchers introduce the notion of deflecting adversarial attacks, which presents a step towards ending the battle between attacks and defenses.
Just as biologists gain insights into organisms by putting model specimens under their microscopes, AI Microscope was designed to help researchers analyze the features that form inside leading CV models.
Researchers introduce XTREME, a multi-task benchmark that evaluates the cross-lingual generalization capabilities of multilingual representations across 40 languages and nine tasks.
In a bid to generate high-resolution images showing realistic daytime changes while keeping accurate scene semantics, researchers have proposed a novel image-to-image translation model, HiDT (High Resolution Daytime Translation).
Researchers from Facebook AI introduce a novel low-dimensional design space, RegNet, which produces simple, fast and versatile networks.
Covid-Sanity is a web interface designed to navigate the flood of bioRxiv and medRxiv COVID-19 papers and make the research within more searchable and sortable.
The respected journal Science has published COVID-19 research that identifies the viral entry attachment stage for the new coronavirus.
Researchers look at current studies that use AI to tackle the COVID-19 crisis and suggest some promising future research directions.
Researchers from the University of Chicago Oriental Institute (OI) and the Department of Computer Science have introduced an artificial intelligence tool called DeepScribe designed to read cuneiform tablets from 25 centuries ago.
Researchers have introduced a novel hybrid continual learning algorithm, Adversarial Continual Learning, which aims to enable the persistent explicit or implicit replay of experiences by storing original samples.
A research team from MIT, Adobe Research, and Shanghai Jiao Tong University has introduced a novel method for reducing the cost and size of Conditional GAN generators.
Researchers from Google Brain Tokyo and Google Japan have proposed a novel approach that helps guide reinforcement learning (RL) agents to what’s important in vision-based tasks.
Researchers investigate how different ImageNet models affect transfer accuracy on domain adaptation problems.
A research team has proposed non-contrast thoracic CT scans as an effective tool for detecting, quantifying, and tracking COVID-19.
Researchers have proposed a new image generative model that leverages the hierarchical space of deep features learned by pretrained classification networks and provides a unified and versatile framework for image generation and manipulation tasks.
Researchers from Bocconi University have prepared an online overview of the commonalities and differences between language-specific BERT models and mBERT.
A new study suggests that VSR models could perform even better if they used additional available visual information.
The earliest evidence of China’s recorded history is found in the Shang dynasty (~1600 to 1046 BC)…
The model outperforms existing methods in image manipulation and offers researchers a possible solution to the scarcity of paired datasets.
Researchers proposed a new training scheme that targets this bias by controlling and exposing textural information slowly through the training process.
Researchers from the Berkeley Artificial Intelligence Research (BAIR) Lab at UC Berkeley explored the effect of Transformer model size on training and inference efficiency.
Researchers proposed an automatic structured pruning framework, AutoCompress, which adopts the ADMM-based weight pruning algorithm introduced in 2018 and outperforms previous automatic model compression methods while maintaining high accuracy.
A new study leverages an established AI-based drug discovery pipeline to produce molecular structures as part of the widening fight against the 2019-nCoV outbreak.
Researchers propose a flexible GNN benchmarking framework that can also accommodate the needs of researchers to add new datasets and models.
Proposed by researchers from Rutgers University and the Samsung AI Center in the UK, CookGAN uses an attention-based ingredients-image association model to condition a generative neural network tasked with synthesizing meal images.
The paper acceptance rate fell to approximately 22 percent from 25 percent in 2019 and 29.6 percent in 2018.
Researchers propose a novel model compression approach to effectively compress BERT by progressive module replacing.
A Google-led research team has introduced a new method for optimizing neural network parameters that is faster than all common first-order methods on complex problems.
In an attempt to equip the TF-IDF-based retriever with a state-of-the-art neural reading comprehension model, researchers introduced a new graph-based recurrent retrieval approach.
Researchers have proposed a novel self-adversarial learning (SAL) paradigm for improving GANs’ performance in text generation.
Bayesian inference meanwhile leverages Bayes’ theorem to update the probability of a hypothesis as additional data becomes available. How can Bayesian inference benefit deep learning models?
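As a toy illustration of that update rule (a sketch not drawn from any particular paper), the conjugate Beta-Bernoulli model shows a posterior shifting as new coin-flip data arrives:

```python
# Minimal sketch of Bayesian updating: estimating the bias of a coin.
# Prior: Beta(1, 1), i.e. uniform over the heads probability. Because the
# Beta prior is conjugate to the Bernoulli likelihood, each observed flip
# updates the posterior in closed form. All values here are illustrative.

def update(alpha, beta, flip):
    """Return posterior Beta parameters after observing one flip (1 = heads)."""
    return (alpha + flip, beta + (1 - flip))

alpha, beta = 1.0, 1.0  # uniform prior
for flip in [1, 1, 0, 1, 1]:  # observed data: 4 heads, 1 tail
    alpha, beta = update(alpha, beta, flip)

posterior_mean = alpha / (alpha + beta)  # = 5/7
print(posterior_mean)  # → 0.7142857142857143
```

Each observation nudges the estimate away from the prior and toward the empirical frequency, which is exactly the behavior Bayesian deep learning methods exploit at the scale of network weights.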
Researchers from Italy’s University of Pisa present a clear and engaging tutorial on the main concepts and building blocks involved in neural architectures for graphs.
Researchers have proposed a simple but powerful “SimCLR” framework for contrastive learning of visual representations.
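For readers curious about the mechanics, below is a minimal NumPy sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss that SimCLR-style contrastive learning builds on. The embeddings and temperature value are illustrative; a real pipeline would produce the embeddings from two augmented views of each image via an encoder and projection head.

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    """NT-Xent loss. z: (2N, d) array; rows 2k and 2k+1 are two views of sample k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                        # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    pos = np.arange(len(z)) ^ 1                        # index of each row's positive (partner view)
    # cross-entropy of each row's positive against all other pairs
    log_prob = sim[np.arange(len(z)), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))  # 4 samples x 2 views, 16-dim embeddings
loss = nt_xent(z)
print(loss)
```

Minimizing this loss pulls the two views of the same image together while pushing all other pairs in the batch apart, which is the core idea the framework's large-batch training exploits.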
A recent Google Brain paper looks into Google’s hugely successful transformer network — BERT — and how it represents linguistic information internally.
Google teamed up with researchers from Synthesis AI and Columbia University to introduce a deep learning approach called ClearGrasp as a first step to teaching machines how to “see” transparent materials.
Researchers from Google Brain and Carnegie Mellon University have released models trained with a semi-supervised learning method called “Noisy Student” that achieve 88.4 percent top-1 accuracy on ImageNet.
Deep learning models are getting larger and larger to meet the demand for better and better performance. Meanwhile, the time…
Researchers introduced semantic region-adaptive normalization (SEAN), a simple but effective building block for conditional Generative Adversarial Networks (cGAN).