ICLR 2020 Cancelled Over Coronavirus; Virtual Conference Arranged
Organizers announced today that the COVID-19 outbreak and associated travel restrictions have forced the cancellation of the physical conference of ICLR 2020, which will instead be held as a virtual event.
Researchers proposed a new training scheme that targets texture bias by controlling and gradually exposing textural information over the course of training.
Researchers proposed an automatic structured pruning framework, AutoCompress, which adopts the 2018 ADMM-based weight pruning algorithm and outperforms previous automatic model compression methods while maintaining high accuracy.
DeepMind released the structure predictions for six proteins associated with SARS-CoV-2, the virus that causes COVID-19, using the most up-to-date version of their AlphaFold system.
Pushed by the coronavirus outbreak, in early March 2020 China’s face mask production hit 110 percent of capacity, with daily output exceeding 100 million masks according to state agency statistics.
A new study leverages an established AI-based drug discovery pipeline to produce molecular structures as part of the widening fight against the 2019-nCoV outbreak.
Researchers propose a flexible GNN benchmarking framework that can also accommodate the needs of researchers to add new datasets and models.
UC Berkeley and Adobe Research have introduced a “universal” detector that can distinguish real images from generated images regardless of what architectures and/or datasets were used for training.
Proposed by researchers from Rutgers University and the Samsung AI Center in the UK, CookGAN uses an attention-based ingredients-image association model to condition a generative neural network tasked with synthesizing meal images.
The KaoKore dataset includes 5552 RGB image files drawn from the 2018 Collection of Facial Expressions dataset of cropped face images from Japanese artworks.
The paper acceptance rate fell to approximately 22 percent from 25 percent in 2019 and 29.6 percent in 2018.
The crowdsourcing produced 111.25 hours of video from 54 non-expert demonstrators to build “one of the largest, richest, and most diverse robot manipulation datasets ever collected using human creativity and dexterity.”
Fast and accurate diagnosis is critical on the front line, and now an AI-powered diagnostic assessment system is helping Hubei medical teams do just that.
In an attempt to equip the TF-IDF-based retriever with a state-of-the-art neural reading comprehension model, researchers introduced a new graph-based recurrent retrieval approach.
The proposed system is capable of searching the continental United States at 1-meter pixel resolution, corresponding to approximately 2 billion images, in around 0.1 seconds.
MonoLayout is a practical deep neural architecture that takes just a single image of a road scene as input and outputs an amodal scene layout in bird’s-eye view.
In a bid to raise awareness of the threats posed by climate change, the Mila team recently published a paper that uses GANs to generate images of how climate events may impact our environments — with a particular focus on floods.
Joseph Redmon, creator of the popular object detection algorithm YOLO, tweeted last week that he had ceased his computer vision research to avoid enabling potential misuse of the tech.
Synced Global AI Weekly February 23rd
DeepMind announced yesterday the release of Haiku and RLax — new JAX libraries designed for neural networks and reinforcement learning respectively.
Researchers from Italy’s University of Pisa present a clear and engaging tutorial on the main concepts and building blocks involved in neural architectures for graphs.
Researchers have proposed a novel generator network specialized for the illustrations in children’s books.
The tool enables researchers to try, compare, and evaluate models to decide which work best on their datasets or for their research purposes.
Synced Global AI Weekly February 16th
Google teamed up with researchers from Synthesis AI and Columbia University to introduce a deep learning approach called ClearGrasp as a first step to teaching machines how to “see” transparent materials.
Researchers from Google Brain and Carnegie Mellon University have released models trained with a semi-supervised learning method called “Noisy Student” that achieve 88.4 percent top-1 accuracy on ImageNet.
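The Noisy Student method iterates a teacher–student self-training loop: a teacher pseudo-labels unlabeled data, a student is trained on the combined data under noise, and the student becomes the next teacher. A minimal sketch of that loop, using a toy 1-D threshold classifier as a hypothetical stand-in for the EfficientNet models in the paper (all data and the `fit`/`predict` helpers are illustrative, not the authors’ code):

```python
import random

def fit(points):
    """Fit a toy threshold classifier: label 1 if x >= threshold."""
    pos = [x for x, y in points if y == 1]
    neg = [x for x, y in points if y == 0]
    return (min(pos) + max(neg)) / 2  # midpoint between the classes

def predict(threshold, x):
    return 1 if x >= threshold else 0

random.seed(0)
labeled = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]
unlabeled = [0.3, 0.4, 0.6, 0.7]

teacher = fit(labeled)
for _ in range(3):  # a few self-training iterations
    # 1. The teacher assigns pseudo-labels to the unlabeled data.
    pseudo = [(x, predict(teacher, x)) for x in unlabeled]
    # 2. The student trains on labeled + pseudo-labeled data with input
    #    noise (the paper also noises via dropout and stochastic depth).
    noisy = [(x + random.gauss(0, 0.01), y) for x, y in labeled + pseudo]
    student = fit(noisy)
    # 3. The student becomes the teacher for the next iteration.
    teacher = student
```

The key ingredient is that the student is deliberately noised while the teacher is not, which pushes the student to be more robust than the teacher it learns from.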
Researchers have introduced the first unsupervised learning approach for identifying interpretable semantic directions in the latent space of generative adversarial network (GAN) models.
Deep learning models are getting larger and larger to meet the demand for better and better performance. Meanwhile, the time and resources required to train them keep growing.
Researchers introduced semantic region-adaptive normalization (SEAN), a simple but effective building block for conditional Generative Adversarial Networks (cGAN).
In a bid to simplify 3D deep learning and improve processing performance and efficiency, Facebook recently introduced an open-source framework for 3D computer vision.
Synced Global AI Weekly February 9th
The crucial step now is to develop matching vaccines and drugs to eradicate the virus, and China’s big tech companies have stepped up to help.
Batchboost is a simple technique to accelerate ML model training by adaptively feeding mini-batches with artificial samples, created by mixing two examples from the previous step and preferentially pairing harder examples with easier ones.
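The mixing step above can be sketched in a few lines. This is a hedged, mixup-style illustration, not batchboost’s actual implementation: the pairing policy is simplified to sorting by a per-example loss, and all names (`mix`, `batchboost_step`) are hypothetical:

```python
import random

def mix(x1, x2, lam):
    """Convex combination of two feature vectors."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

def batchboost_step(batch, losses, alpha=0.5):
    # Sort example indices by loss, then pair the hardest (highest-loss)
    # examples with the easiest ones.
    order = sorted(range(len(batch)), key=lambda i: losses[i])
    easy = order[: len(batch) // 2]
    hard = order[len(batch) // 2:][::-1]
    # Mixing weight drawn from a Beta distribution, as in mixup.
    lam = random.betavariate(alpha, alpha)
    return [mix(batch[h], batch[e], lam) for e, h in zip(easy, hard)]

random.seed(0)
batch = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
losses = [0.1, 0.9, 0.2, 0.7]
mixed = batchboost_step(batch, losses)  # two artificial samples
```

Each artificial sample lies on the line segment between its two parents, so the model repeatedly revisits difficult examples blended with easy ones.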
In an effort to enrich resources for multispeaker singing-voice synthesis, a team of researchers from the University of Tokyo has developed a Japanese multispeaker singing-voice corpus.
Researchers proposed a “radioactive data” technique for subtly marking images in a dataset to help researchers later determine whether they were used to train a particular model.
In a new paper, researchers from the University of Toronto, Vector Institute, and University of Wisconsin-Madison propose SISA training, a new framework that helps models “unlearn” information by reducing the number of updates that need to be computed when data points are removed.
In a new paper, researchers from New York University and Modl.ai, a company applying machine learning to game development, suggest that simple spatial processing methods such as rotation, translation, and cropping could help improve model generalization.
The tool can significantly accelerate the prediction time of a virus’s RNA secondary structure, affording frontline researchers an opportunity to better understand the virus and develop targeted vaccines in a time of crisis.
Facebook’s new HiPlot is a lightweight interactive visualization tool that uses parallel plots to discover correlations and patterns in high-dimensional data.
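HiPlot consumes a list of flat dictionaries, one per experiment run, with hyperparameters and metrics as keys. A small sketch of preparing a hyperparameter sweep in that shape (the data here is synthetic and illustrative; the `Experiment.from_iterable` / `display()` calls in the comment are from HiPlot’s documented API but are not executed, since rendering needs a notebook or server):

```python
import itertools
import random

random.seed(0)

# One dict per run: hyperparameters plus the resulting metric.
runs = [
    {"lr": lr, "dropout": d, "layers": n,
     "accuracy": round(random.uniform(0.7, 0.95), 3)}
    for lr, d, n in itertools.product([1e-3, 1e-2], [0.1, 0.5], [2, 4])
]

# In a notebook, this renders an interactive parallel plot:
#   import hiplot as hip
#   hip.Experiment.from_iterable(runs).display()
```

Each dictionary becomes one polyline across the parallel axes, so clusters of high-accuracy runs stand out as bundles of lines converging on similar hyperparameter values.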
A new paper from the University of Washington Seattle and the University of California, Berkeley looks at saddle points on Riemannian Manifolds. In this article Synced takes a deep dive into this important research.