A research team from Google Brain conducts a comprehensive empirical study of more than fifty design choices in a generic adversarial imitation learning (AIL) framework, training over 500k agents on continuous-control tasks to explore each choice's impact and to provide practical insights and recommendations for designing novel, effective AIL algorithms.
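The core loop such a framework varies can be sketched roughly as follows: a discriminator is trained to separate expert transitions from policy transitions, and its output is converted into a reward for the imitating policy. The tiny logistic-regression discriminator, the toy data, and the particular reward transform below are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch of an adversarial imitation learning (AIL) loop.
# Assumption: a logistic-regression discriminator on toy 4-D "transitions";
# real AIL methods use neural networks and live policy rollouts.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: expert state-action features cluster around +1, policy around -1.
expert = rng.normal(loc=1.0, scale=0.5, size=(256, 4))
policy = rng.normal(loc=-1.0, scale=0.5, size=(256, 4))

w = np.zeros(4)
b = 0.0
lr = 0.1

for _ in range(200):
    # Discriminator step: binary logistic loss, label 1 = expert, 0 = policy.
    p_e = sigmoid(expert @ w + b)
    p_p = sigmoid(policy @ w + b)
    grad_w = expert.T @ (p_e - 1.0) / len(expert) + policy.T @ p_p / len(policy)
    grad_b = np.mean(p_e - 1.0) + np.mean(p_p)
    w -= lr * grad_w
    b -= lr * grad_b

# AIL-style reward: high when the discriminator mistakes a policy
# transition for an expert one. This -log(1 - D) form is one common
# choice among several such studies compare.
reward = -np.log(1.0 - sigmoid(policy @ w + b) + 1e-8)
```

In a full algorithm, `reward` would then drive a reinforcement-learning update of the policy, and the two steps would alternate; the study's point is that many small choices in this loop (losses, reward transforms, regularizers) matter empirically.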
Imagine the lips forming the Mona Lisa’s famous smile were to part, and she began “speaking” to you. This is not some sci-fi fantasy or a 3D face animation; it’s an effect achieved by researchers from Samsung’s AI lab and the Skolkovo Institute of Science and Technology, who used adversarial learning to generate a photorealistic talking head model.
Nvidia and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have open-sourced their video-to-video synthesis model. Built on a generative adversarial learning framework, the method generates high-resolution, photorealistic, and temporally coherent videos from a variety of input formats, including segmentation masks, sketches, and poses.