Google Brain’s Switch Transformer language model packs a whopping 1.6 trillion parameters while keeping computational cost in check by activating only a small fraction of those parameters for any given token. The model achieved a 4x pretraining speedup over a strongly tuned T5-XXL baseline.
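The cost control comes from sparse “switch” routing: a learned router sends each token to a single expert feed-forward block, so per-token compute stays roughly constant no matter how many experts (and hence parameters) the layer holds. The snippet below is a minimal NumPy sketch of that routing idea, not Google’s Mesh-TensorFlow implementation; `switch_layer`, its shapes, and the toy experts are illustrative assumptions.

```python
import numpy as np

def switch_layer(tokens, router_w, experts):
    """Illustrative top-1 ("switch") routing: each token activates exactly one
    expert FFN, so compute per token stays flat as experts are added.
    tokens:   (n_tokens, d_model) activations
    router_w: (d_model, n_experts) router weights
    experts:  list of callables, each mapping (d_model,) -> (d_model,)
    """
    logits = tokens @ router_w                           # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)           # softmax over experts
    choice = probs.argmax(axis=-1)                        # top-1 expert per token
    out = np.empty_like(tokens)
    for i, tok in enumerate(tokens):
        e = choice[i]
        # The output is scaled by the router probability; in the trained model
        # this keeps the gate differentiable, here it is just mirrored numerically.
        out[i] = probs[i, e] * experts[e](tok)
    return out

# Toy usage: 8 experts, but each token only ever touches one of them.
d, n_exp = 16, 8
experts = [(lambda W: (lambda x: np.tanh(x @ W)))(np.random.randn(d, d))
           for _ in range(n_exp)]
y = switch_layer(np.random.randn(4, d), np.random.randn(d, n_exp), experts)
```

In the full model an auxiliary load-balancing loss keeps tokens spread evenly across experts; that detail is omitted from this sketch.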
This is the fourth Synced year-end compilation of “Artificial Intelligence Failures.” Our aim is neither to shame nor to downplay AI research, but to look at where and how it has gone awry, in the hope that we can create better AI systems in the future.
In the new paper Canonical Capsules: Unsupervised Capsules in Canonical Pose, Turing Award honoree Dr. Geoffrey Hinton and a team of researchers propose a capsule-based architecture for unsupervised learning on 3D point clouds.
This year, 22 Transformer-related research papers were accepted by NeurIPS, one of the world’s most prestigious machine learning conferences. Synced has selected ten of these works to showcase the latest Transformer trends.
OpenAI’s groundbreaking GPT-3 language model paper, a no-regret learning dynamics study from Politecnico di Milano & Carnegie Mellon University, and a UC Berkeley work on data summarization have been named the NeurIPS 2020 Best Paper Award winners.
The approach dramatically reduces bandwidth requirements by sending only a keypoint representation [of faces] and reconstructing the video on the receiver side, where generative adversarial networks (GANs) synthesize the talking heads.
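As a rough mental model of that sender/receiver split, the sketch below shows the shape of such a keypoint-based codec: ship one reference frame once, then only compact per-frame keypoints, and let a receiver-side generator (a GAN in the actual system) re-render the video. The class and function names are hypothetical stand-ins, not the original system’s API, and the toy lambdas exist only so the example runs.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class KeypointCodec:
    """Illustrative keypoint-based video codec (names are assumptions)."""
    detect: Callable[[object], Sequence[tuple]]             # frame -> keypoints
    generate: Callable[[object, Sequence[tuple]], object]   # (ref, keypoints) -> frame

    def encode(self, frames):
        # Only one reference frame plus a small keypoint set per frame
        # crosses the wire, instead of full video frames.
        ref = frames[0]
        return ref, [self.detect(f) for f in frames]

    def decode(self, ref, keypoint_stream):
        # The receiver-side generator (a GAN in the described approach)
        # synthesizes each talking-head frame from appearance + pose.
        return [self.generate(ref, kps) for kps in keypoint_stream]

# Toy stand-ins so the sketch runs end to end.
toy_detect = lambda frame: [(0.5, 0.5)] * 10
toy_generate = lambda ref, kps: f"frame rendered from {len(kps)} keypoints"

codec = KeypointCodec(toy_detect, toy_generate)
ref, stream = codec.encode(["f0", "f1", "f2"])
print(codec.decode(ref, stream))
```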
DeepMind, Google’s UK-based AI research lab, says its AlphaFold system has solved the protein folding problem, a grand challenge that has vexed the biology research community for half a century.
Researchers from City University of Hong Kong and SenseTime propose MODNet, a lightweight matting objective decomposition network that performs human matting in real time from a single input image, handling diverse and dynamic backgrounds.
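As a sketch of what “objective decomposition” means here: MODNet splits portrait matting into coarse semantic estimation, boundary-detail prediction, and a fusion step that produces the final alpha matte, each carrying its own training objective. The skeleton below only mirrors that structure under those assumptions; the function names and plain-callable branches are illustrative, not the authors’ code.

```python
from typing import Callable

def matting_forward(image,
                    semantic_branch: Callable,   # low-res coarse person segmentation
                    detail_branch: Callable,     # fine boundary (hair/edge) prediction
                    fusion_branch: Callable):    # combines both into the alpha matte
    coarse = semantic_branch(image)              # where the subject roughly is
    detail = detail_branch(image, coarse)        # boundary refinement guided by the coarse map
    alpha = fusion_branch(coarse, detail)        # final alpha matte
    # Each of the three outputs is supervised by its own loss during training.
    return coarse, detail, alpha

# Toy stand-ins so the sketch runs.
coarse, detail, alpha = matting_forward(
    "rgb_image",
    semantic_branch=lambda img: "coarse_mask",
    detail_branch=lambda img, c: "boundary_detail",
    fusion_branch=lambda c, d: "alpha_matte",
)
```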
In a new paper, researchers from Google, OpenAI, and DeepMind introduce “behaviour priors,” a framework designed to capture movement and interaction patterns shared across a set of related tasks or contexts.
“Trust in AI systems is becoming, if not already, the biggest barrier for enterprises — as they start to move from exploring AI or potentially piloting or doing some proof of concept works into deploying AI into a production system.”