This year, 22 Transformer-related research papers were accepted by NeurIPS, one of the world’s most prestigious machine learning conferences. Synced has selected ten of these works to showcase the latest Transformer trends.
OpenAI’s groundbreaking GPT-3 language model paper, a no-regret learning dynamics study from Politecnico di Milano and Carnegie Mellon University, and a UC Berkeley paper on data summarization have been named the NeurIPS 2020 Best Paper Award winners.
The approach dramatically reduces bandwidth requirements by transmitting only a keypoint representation of faces and reconstructing the source video on the receiver side, where generative adversarial networks (GANs) synthesize the talking heads.
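As a rough illustration of how such a pipeline fits together, the PyTorch sketch below pairs a sender-side keypoint extractor with a receiver-side generator that re-synthesizes frames from a single reference image. The module names, layers, and shapes are hypothetical stand-ins for illustration, not the actual production models.

```python
import torch
import torch.nn as nn

class KeypointExtractor(nn.Module):
    """Sender side: compress each video frame into a handful of 2-D keypoints."""
    def __init__(self, num_keypoints: int = 10):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, num_keypoints * 2),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # (B, 3, H, W) frame -> (B, K, 2) keypoint coordinates in [-1, 1]
        return torch.tanh(self.net(frame)).view(-1, self.num_keypoints, 2)

class Generator(nn.Module):
    """Receiver side: rebuild the frame from a reference image plus keypoints."""
    def __init__(self, num_keypoints: int = 10):
        super().__init__()
        self.fc = nn.Linear(num_keypoints * 2, 3 * 64 * 64)

    def forward(self, reference: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor:
        # A real system would condition a trained GAN generator on the reference
        # appearance and the motion keypoints; here we simply add a residual.
        residual = self.fc(keypoints.flatten(1)).view(-1, 3, 64, 64)
        return reference + residual

extractor, generator = KeypointExtractor(), Generator()
reference = torch.rand(1, 3, 64, 64)  # full frame, sent once per call
frame = torch.rand(1, 3, 64, 64)      # each new frame captured on the sender

keypoints = extractor(frame)          # only ~10 (x, y) pairs cross the network
reconstructed = generator(reference, keypoints)
print(keypoints.shape, reconstructed.shape)  # (1, 10, 2) (1, 3, 64, 64)
```

The bandwidth saving comes from the asymmetry: the full-resolution reference image is transmitted once, after which each frame costs only a few dozen floating-point coordinates.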
Google’s UK-based AI research company DeepMind says its AlphaFold system has solved the protein folding problem, a grand challenge that has vexed the biology research community for half a century.
Researchers from the City University of Hong Kong and SenseTime propose MODNet, a lightweight matting objective decomposition network that performs real-time human matting from a single input image and handles diverse and dynamic backgrounds.
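Read literally, “objective decomposition” means splitting the hard matting objective into easier sub-objectives that are supervised jointly. The toy PyTorch sketch below follows that reading with three cooperating branches; the class names and layer choices are hypothetical placeholders, not the MODNet architecture itself.

```python
import torch
import torch.nn as nn

class TinyBranch(nn.Module):
    """Placeholder sub-network standing in for one branch of the decomposition."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, out_ch, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class DecomposedMatting(nn.Module):
    """Split matting into semantic, detail, and fusion sub-objectives."""
    def __init__(self):
        super().__init__()
        self.semantic = TinyBranch(3, 1)  # coarse foreground probability
        self.detail = TinyBranch(4, 1)    # boundary detail, sees image + semantics
        self.fusion = TinyBranch(5, 1)    # blends both into the final alpha matte

    def forward(self, image: torch.Tensor):
        coarse = self.semantic(image)
        detail = self.detail(torch.cat([image, coarse], dim=1))
        alpha = self.fusion(torch.cat([image, coarse, detail], dim=1))
        return coarse, detail, alpha  # each output can carry its own loss term

model = DecomposedMatting()
image = torch.rand(1, 3, 128, 128)
coarse, detail, alpha = model(image)
print(alpha.shape)  # (1, 1, 128, 128): per-pixel foreground opacity
```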
In a new paper, researchers from Google, OpenAI, and DeepMind introduce “behaviour priors,” a framework designed to capture common movement and interaction patterns that are shared across a set of related tasks or contexts.
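A standard way to use such a prior, sketched below under our own assumptions rather than the paper’s exact formulation, is to reward the task policy while penalizing its KL divergence from the shared prior, so task-specific behaviour stays close to the common movement patterns.

```python
import torch
import torch.distributions as dist

# Toy one-dimensional action distributions; alpha trades task reward against
# staying close to the shared behaviour prior. All values are illustrative.
alpha = 0.1
policy = dist.Normal(torch.tensor([0.3]), torch.tensor([1.0]))  # task policy
prior = dist.Normal(torch.tensor([0.0]), torch.tensor([1.0]))   # shared prior

action = policy.rsample()
reward = -(action - 1.0) ** 2               # toy task reward
kl = dist.kl_divergence(policy, prior)      # divergence from the shared prior
objective = reward - alpha * kl             # maximized during training
print(objective)
```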
“Trust in AI systems is becoming, if not already, the biggest barrier for enterprises as they move from exploring AI, piloting it, or doing proof-of-concept work to deploying AI in production systems.”
A team from Google, the University of Cambridge, DeepMind, and the Alan Turing Institute has proposed Performer, a new type of Transformer built on a Fast Attention Via positive Orthogonal Random features (FAVOR+) mechanism that approximates regular softmax attention in linear space and time.
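At its core, FAVOR+ replaces the quadratic n-by-n softmax attention matrix with positive random-feature maps whose inner products approximate the softmax kernel. The NumPy sketch below demonstrates that approximation using plain Gaussian draws; the actual mechanism additionally orthogonalizes the random features, and the feature count here is an arbitrary choice.

```python
import numpy as np

def positive_random_features(x, w):
    # phi(x) = exp(w @ x - ||x||^2 / 2) / sqrt(m), the positive feature map
    # whose inner products approximate the softmax kernel exp(q . k).
    m = w.shape[0]
    sq_norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2.0
    return np.exp(x @ w.T - sq_norm) / np.sqrt(m)

def favor_attention(q, k, v, num_features=256, seed=0):
    """Linear-time softmax-attention approximation with positive random features."""
    rng = np.random.default_rng(seed)
    d = q.shape[-1]
    # Scale so that q . k / sqrt(d) matches standard attention logits.
    q, k = q / d ** 0.25, k / d ** 0.25
    w = rng.standard_normal((num_features, d))  # FAVOR+ orthogonalizes these
    q_prime = positive_random_features(q, w)    # (n, m)
    k_prime = positive_random_features(k, w)    # (n, m)
    kv = k_prime.T @ v                          # (m, d_v), built in O(n)
    normalizer = q_prime @ k_prime.sum(axis=0, keepdims=True).T
    return (q_prime @ kv) / normalizer          # never forms the n x n matrix

n, d = 128, 32
rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, n, d))
approx = favor_attention(q, k, v)

# Sanity check against exact softmax attention.
logits = (q @ k.T) / np.sqrt(d)
weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
exact = (weights / weights.sum(axis=-1, keepdims=True)) @ v
print(np.abs(approx - exact).mean())  # small for a modest number of features
```

Because the keys and values are summarized once into an m-by-d matrix, the cost grows linearly with sequence length instead of quadratically.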