This year, NeurIPS is hosting two workshops dedicated to self-supervised learning: Self-Supervised Learning for Speech and Audio Processing on Friday, December 11; and Self-Supervised Learning — Theory and Practice on Saturday, December 12.
OpenAI’s groundbreaking GPT-3 language model paper, a no-regret learning dynamics study from Politecnico di Milano & Carnegie Mellon University, and a UC Berkeley work on data summarization have been named the NeurIPS 2020 Best Paper Award winners.
The approach dramatically reduces bandwidth requirements by transmitting only a keypoint representation [of faces] and reconstructing the source video on the receiver side, using generative adversarial networks (GANs) to synthesize the talking heads.
DeepMind, Google’s UK-based AI research company, says its AlphaFold AI system has solved the protein folding problem, a grand challenge that has vexed the biology research community for half a century.
The CoRL 2020 Best System Paper Award was presented today to Huawei Noah’s Ark Lab, Shanghai Jiao Tong University and University College London for their paper SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving.
New Spoken Language Understanding (SLU) research from MIT CSAIL and Amazon AI introduces step-skipping semi-supervised frameworks that take speech as input and achieve performance competitive with systems leveraging oracle text.
“Our research provides enriched AR user experiences by enabling a more fine-grained visual recognition feature in AR, which is desirable in a wide range of application scenarios including technical support,” IBM researchers say.