This year, 22 Transformer-related research papers were accepted by NeurIPS, the world’s most prestigious machine learning conference. Synced has selected ten of these works to showcase the latest Transformer trends.
This year, NeurIPS is hosting two workshops dedicated to self-supervised learning: Self-Supervised Learning for Speech and Audio Processing on Friday, December 11; and Self-Supervised Learning — Theory and Practice on Saturday, December 12.
At AWS re:Invent, Amazon Web Services, Inc., an Amazon.com company, announced Amazon Monitron, Amazon Lookout for Equipment, the AWS Panorama Appliance, the AWS Panorama SDK, and Amazon Lookout for Vision.
OpenAI’s groundbreaking GPT-3 language model paper, a no-regret learning dynamics study from Politecnico di Milano & Carnegie Mellon University, and a UC Berkeley work on data summarization have been named the NeurIPS 2020 Best Paper Award winners.
The approach dramatically reduces bandwidth requirements by transmitting only a keypoint representation [of faces]; on the receiver side, generative adversarial networks (GANs) reconstruct the video by synthesizing the talking heads from those keypoints.
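The scale of the bandwidth reduction is easy to see with back-of-the-envelope arithmetic. The sketch below is purely illustrative: the keypoint count (68), float32 coordinates, and 640×480 RGB frame size are assumptions for the sake of the comparison, not figures from the paper.

```python
# Illustrative comparison: keypoint payload vs. raw video frame.
# All parameters below are assumptions, not values from the paper.
KEYPOINTS = 68                    # assumed facial keypoint count
keypoint_bytes = KEYPOINTS * 2 * 4   # x,y coords as float32 -> 544 bytes/frame
raw_frame_bytes = 640 * 480 * 3      # uncompressed RGB frame -> 921,600 bytes

ratio = raw_frame_bytes / keypoint_bytes
print(f"Keypoints: {keypoint_bytes} B/frame; raw frame: {raw_frame_bytes} B; "
      f"~{ratio:.0f}x smaller")
```

Even against a codec-compressed stream rather than raw frames, a payload this small leaves ample headroom for the GAN to do the heavy lifting on the receiver side.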
DeepMind, Google’s UK-based AI research lab, says its AlphaFold system has solved the protein folding problem, a grand challenge that has vexed the biology research community for half a century.
Researchers from the City University of Hong Kong and SenseTime propose a lightweight matting objective decomposition network (MODNet) that performs smooth, real-time human matting from a single input image against diverse and dynamic backgrounds.
The CoRL 2020 Best System Paper Award was presented today to Huawei Noah’s Ark Lab, Shanghai Jiao Tong University and University College London for their paper SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving.
New Spoken Language Understanding (SLU) research from MIT CSAIL and Amazon AI introduces semi-supervised frameworks that skip intermediate steps, taking speech directly as input and achieving performance competitive with systems that leverage oracle text.
“Our research provides enriched AR user experiences by enabling a more fine-grained visual recognition feature in AR, which is desirable in a wide range of application scenarios including technical support,” IBM researchers say.