Love-Love: Stanford Researchers Generate Realistic ‘Fake’ Wimbledon
With Wimbledon 2020 cancelled, a Stanford University research team has responded with an AI-powered model capable of realistically simulating a Wimbledon final and more.
ACM SIGGRAPH has honoured MIT CSAIL postdoctoral researcher Li Tzu-Mao with its 2020 Outstanding Doctoral Dissertation Award for his PhD thesis Differentiable Visual Computing.
In the seminal 1996 paper Light Field Rendering, Levoy and Hanrahan describe a representation for light fields that allows for both efficient creation and display.
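For readers curious what that representation looks like in practice, the sketch below is a minimal NumPy illustration (not the paper's original code) of the two-plane parameterization the paper popularized: every ray is indexed by where it crosses a camera plane (u, v) and a focal plane (s, t), so extracting a view or synthetically refocusing becomes a simple array operation. The array sizes and random data are placeholders.

```python
# Minimal sketch of the two-plane light-field parameterization from
# Levoy & Hanrahan (1996): each ray is indexed by its intersections with a
# camera (u, v) plane and a focal (s, t) plane. The 4D array below is a
# stand-in for real captured radiance samples.
import numpy as np

U, V, S, T = 8, 8, 64, 64                    # angular (u, v) and spatial (s, t) resolution
light_field = np.random.rand(U, V, S, T, 3)  # placeholder RGB radiance samples

def render_pinhole_view(lf, u, v):
    """Extract the sub-aperture image seen from camera-plane position (u, v)."""
    return lf[u, v]                          # an (S, T, 3) image

def refocus(lf, shift):
    """Synthetic refocusing by shift-and-average of the sub-aperture images.
    `shift` controls the focal depth of the result."""
    nu, nv, s, t, _ = lf.shape
    out = np.zeros((s, t, 3))
    for i in range(nu):
        for j in range(nv):
            di = int(round((i - nu / 2) * shift))
            dj = int(round((j - nv / 2) * shift))
            out += np.roll(lf[i, j], (di, dj), axis=(0, 1))
    return out / (nu * nv)

novel_view = render_pinhole_view(light_field, u=3, v=5)
refocused = refocus(light_field, shift=0.5)
```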
A team of researchers from CMU and Technion recently introduced Penrose, a new system that can turn complex mathematical notation into simple diagrams in a variety of styles.
A new benchmark for wide-baseline image matching includes a 30,000-image dataset with depth maps and accurate pose information.
The Association for Computing Machinery (ACM) this morning announced Patrick M. (Pat) Hanrahan and Edwin E. (Ed) Catmull as its 2019 Turing Award winners.
Researchers have proposed a new generative image model that leverages the hierarchical space of deep features learned by pretrained classification networks, providing a unified and versatile framework for image generation and manipulation tasks.
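The paper's exact architecture aside, the hedged PyTorch sketch below shows the general building block such feature-space methods rely on: optimizing an image so that its activations in a pretrained classifier match a target's. The choice of torchvision's VGG-16, the selected layers, and the plain MSE objective are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch: steer an image through the hierarchical feature space of
# a pretrained classifier by matching deep activations to a target image.
import torch
import torchvision.models as models

vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(3, 8, 15, 22)):      # a few conv blocks, shallow to deep (assumed)
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layers:
            feats.append(h)
    return feats

target = torch.rand(1, 3, 224, 224)          # stand-in for a real target image
target_feats = features(target)

image = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = sum(torch.nn.functional.mse_loss(f, tf)
               for f, tf in zip(features(image), target_feats))
    loss.backward()
    opt.step()
```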
Proposed by researchers from Rutgers University and the Samsung AI Center in the UK, CookGAN uses an attention-based ingredients-image association model to condition a generative neural network tasked with synthesizing meal images.
The proposed system can search the continental United States at 1-meter pixel resolution, corresponding to approximately 2 billion images, in around 0.1 seconds.
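The teaser does not describe the system's internals, but as a rough illustration, sub-second retrieval over billions of image tiles is typically achieved by embedding every tile and querying a compressed approximate-nearest-neighbor index. The sketch below uses FAISS with product quantization purely as an assumed stand-in, with random vectors in place of real tile embeddings.

```python
# Illustrative approximate-nearest-neighbor search over image-tile embeddings.
import numpy as np
import faiss

d = 128                                              # embedding dimensionality (assumed)
xb = np.random.rand(100_000, d).astype('float32')    # stand-in for tile embeddings
xq = np.random.rand(5, d).astype('float32')          # query embeddings

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 1024, 16, 8)  # 1024 cells, 16 sub-codes of 8 bits
index.train(xb)
index.add(xb)
index.nprobe = 32                                    # trade accuracy for speed

distances, ids = index.search(xq, 10)                # top-10 visually similar tiles
```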
Researchers have proposed a novel generator network specialized in children’s book illustrations.
Researchers from Beijing’s National Laboratory of Pattern Recognition (NLPR), SenseTime Research, and Nanyang Technological University have taken the technology a step further with a new framework that enables fully arbitrary audio-to-video translation.
A new study from Peking University and Microsoft Research Asia proposes FaceShifter, a novel two-stage framework that aims for high-fidelity and occlusion-aware face swapping.
Do you have two left feet? Do you avoid the dance floor out of fear of embarrassment? If you’ve ever secretly wished you could move your body like Joaquín Cortés — well, at least in a video — a new AI-powered 3D body mesh recovery module called Liquid Warping GAN can give you a leg up.
A new Adobe-developed AI tool significantly lowers the barrier to producing dynamic images, with a framework that synthesizes a “3D Ken Burns effect” from a single image.
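As a rough sketch of the underlying idea rather than Adobe's implementation: predict a depth map for the photo, lift every pixel into a 3D point cloud, then re-render frames while a virtual camera slowly moves, as in the toy NumPy example below. The random image, random depth, focal length, and camera path are all placeholder assumptions.

```python
# Toy "3D Ken Burns" pipeline: unproject pixels with a depth map, then
# splat the resulting point cloud into views along a virtual camera path.
import numpy as np

H, W, f = 480, 640, 500.0
rgb = np.random.rand(H, W, 3)              # stand-in for the input photo
depth = 1.0 + np.random.rand(H, W)         # stand-in for a predicted depth map

ys, xs = np.mgrid[0:H, 0:W]
X = (xs - W / 2) / f * depth               # unproject to camera-space 3D points
Y = (ys - H / 2) / f * depth
points = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
colors = rgb.reshape(-1, 3)

def render(points, colors, cam_t):
    """Splat the point cloud into a new view after translating the camera by cam_t."""
    p = points - cam_t
    u = np.round(p[:, 0] / p[:, 2] * f + W / 2).astype(int)
    v = np.round(p[:, 1] / p[:, 2] * f + H / 2).astype(int)
    frame = np.zeros((H, W, 3))
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (p[:, 2] > 0)
    order = np.argsort(-p[valid, 2])       # paint far points first (simple z-ordering)
    frame[v[valid][order], u[valid][order]] = colors[valid][order]
    return frame

frames = [render(points, colors, np.array([0.0, 0.0, 0.02 * t])) for t in range(30)]
```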
Google has added differentiable graphics layers to its TensorFlow deep learning platform with TensorFlow Graphics, which combines computer graphics and computer vision. Google says TensorFlow Graphics can help address data labeling challenges for complex 3D vision tasks by leveraging a self-supervised training approach.
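The toy example below (hand-rolled TensorFlow, not the TensorFlow Graphics API itself) illustrates why differentiable graphics layers enable that kind of self-supervision: when the mapping from scene parameters to observations is differentiable, an unknown parameter (here a single rotation angle) can be recovered by gradient descent from the observations alone, with no labels.

```python
# Toy analysis-by-synthesis: recover a rotation angle by differentiating
# through the transform that generated the observations.
import tensorflow as tf

def rot2d(a):
    """Differentiable 2D rotation matrix."""
    c, s = tf.cos(a), tf.sin(a)
    return tf.reshape(tf.stack([c, -s, s, c]), (2, 2))

points = tf.constant([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
observed = points @ tf.transpose(rot2d(tf.constant(0.7)))   # "rendered" observation

angle = tf.Variable(0.0)                                    # unknown parameter to recover
opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for _ in range(300):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(points @ tf.transpose(rot2d(angle)) - observed))
    opt.apply_gradients([(tape.gradient(loss, angle), angle)])

print(angle.numpy())   # approaches 0.7: the parameter is recovered without labels
```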