Facebook Proposes Free-Viewpoint Rendering on Monocular Video
Facebook’s new model enables free-viewpoint rendering of dynamic scenes from a single monocular video.
AI Technology & Industry Review
Chinese researchers propose a novel regression framework in pursuit of “fast, accurate and stable 3D dense face alignment simultaneously.”
Google researchers have introduced a series of extensions to the SOTA view-synthesis method Neural Radiance Fields (NeRF) that enable it to produce high-quality 3D representations of complex scenes with only unstructured image collections as input.
Google’s Head of AI & IoT Strategic Business Development EMEA has been appointed Chief Business Officer of Outsight.
A team of Google researchers recently proposed a novel “complete and label” domain adaptation approach.
A team of researchers proposed a novel three-stage learning framework that incorporates scene context to predict long-term 3D human motion from a single scene image and 2D pose histories.
Deep Fashion3D contains 2,078 3D garment models reconstructed from real-world garments in 10 different clothing categories.
Their proposed framework outperforms state-of-the-art approaches for 3D reconstruction from 2D and 2.5D data, achieving 12 percent better performance on average on the ShapeNet benchmark and up to 19 percent for certain object classes.
In a bid to simplify 3D deep learning and improve processing performance and efficiency, Facebook recently introduced an open-source framework for 3D computer vision.
Do you have two left feet? Do you avoid the dance floor out of fear of embarrassment? If you’ve ever secretly wished you could move your body like Joaquín Cortés — well, at least in a video — a new AI-powered 3D body mesh recovery module called Liquid Warping GAN can give you a leg up.
A new Adobe-developed AI tool significantly lowers the threshold for producing dynamic images with a framework that synthesizes a “3D Ken Burns effect” from a single image.
A team of researchers from the Chinese gaming giant NetEase has developed a method that automatically creates players’ in-game characters from a standard portrait photo.
Google researchers have introduced a new face detection framework called BlazeFace, adapted from the Single Shot Multibox Detector (SSD) framework and optimized for inference on mobile GPUs.
In a new paper accepted to CVPR 2019, researchers from the Max Planck Institute for Intelligent Systems introduce RingNet, an end-to-end trainable network that learns to compute 3D face shape from a single face image without 3D supervision.
Interactive movies are redefining cinema and storytelling and opening up a world of possibilities in the entertainment industry. There are no “spoilers” for films with no predetermined endings, whose characters and plots develop based on viewers’ real-time direction. Now, what if these viewers became characters?