A Google Research team proposes a message-passing graph neural network that explicitly models spatio-temporal relations, can use either implicit or explicit object representations, and generalizes previous structured models for video understanding.
A Google Research team accelerates the rendering procedure of Neural Radiance Fields for view-synthesis tasks, enabling it to run in real time while retaining its ability to represent fine geometric details and convincing view-dependent effects.
Synced invited Dr. Linchao Zhu, a lecturer at the ReLER Lab, University of Technology Sydney, whose work focuses on video representation learning, to share his thoughts on the paper Text-to-Image Generation Grounded by Fine-Grained User Attention.
VOGUE is an AI-powered optimization method that deforms garments according to a given body shape while preserving pattern and material details, delivering state-of-the-art photorealistic, high-resolution try-on images.
In the new paper Canonical Capsules: Unsupervised Capsules in Canonical Pose, Turing Award laureate Dr. Geoffrey Hinton and a team of researchers propose a capsule-based architecture for unsupervised learning on 3D point clouds.