Computer Vision & Graphics

Nvidia Releases ‘Imaginaire’ Library for Image and Video Synthesis

Imaginaire is a universal PyTorch library designed for various GAN-based tasks and methods.

Generative adversarial networks (GANs) can discover and learn the regularities and patterns in input data and use that knowledge to generate realistic examples across a range of domains. They are especially prominent in image-to-image translation tasks, such as converting photos of summer scenes to winter or day to night, and in generating photorealistic images of objects, scenes and people.

Researchers from chip giant Nvidia this week delivered Imaginaire, a universal PyTorch library designed for various GAN-based tasks and methods. Imaginaire comprises optimized implementations of several Nvidia image and video synthesis methods, and the company says the library is easy to install, follow, and develop.


The Imaginaire library currently covers supervised image-to-image translation models, unsupervised image-to-image translation models, and video-to-video translation models. The library package provides a tutorial for each model.

Supervised image-to-image translation models include pix2pixHD, which learns a mapping that converts a semantic image to a high-resolution photorealistic image, and SPADE, which uses a simple but effective layer to synthesize photorealistic images from an input semantic layout, improving on pix2pixHD in its handling of diverse input labels and in output quality.
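The core idea behind SPADE (spatially-adaptive denormalization) can be sketched in a few lines: normalize the activations, then scale and shift them with parameters predicted from the semantic layout itself, so the layout is never washed out by normalization. The following is a toy NumPy illustration of that mechanism, not Imaginaire's implementation; the function name, shapes, and the use of 1x1 "convolutions" (plain channel matmuls) are simplifying assumptions.

```python
import numpy as np

def spade_modulate(x, seg, w_gamma, w_beta, eps=1e-5):
    """Toy SPADE-style modulation (hypothetical simplification).

    x:       (C, H, W) activations
    seg:     (S, H, W) one-hot semantic layout
    w_gamma, w_beta: (S, C) weights of 1x1 prediction convs,
                     implemented here as channel matmuls
    """
    # Parameter-free normalization over spatial dims, per channel.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)

    # Predict spatially varying scale/shift from the semantic layout.
    gamma = np.einsum('sc,shw->chw', w_gamma, seg)
    beta = np.einsum('sc,shw->chw', w_beta, seg)

    # Modulate: the layout steers every normalized activation.
    return x_norm * (1.0 + gamma) + beta

# Example: 4 channels, 2 semantic classes, 8x8 feature map.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))
seg = np.zeros((2, 8, 8))
seg[0, :, :4] = 1.0  # class 0 on the left half
seg[1, :, 4:] = 1.0  # class 1 on the right half
out = spade_modulate(x, seg, rng.normal(size=(2, 4)), rng.normal(size=(2, 4)))
print(out.shape)  # (4, 8, 8)
```

Because gamma and beta vary per pixel according to the layout, different semantic regions receive different modulations, which is what lets the layer preserve label information that ordinary normalization would erase.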

Unsupervised image-to-image translation models include UNIT (Unsupervised Image-to-Image Translation), for one-to-one mapping between two visual domains; MUNIT, for many-to-many mapping between two visual domains; FUNIT, a style-guided image translation model that can generate translations in unseen domains; and COCO-FUNIT, an improved version of FUNIT with a content-conditioned style encoding scheme for style code computation.
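A building block behind many-to-many translators of the MUNIT family is the content/style split: strip the content features of their own statistics, then re-impose the statistics of a style image (adaptive instance normalization, AdaIN). The sketch below shows only that recombination step on raw arrays as a minimal illustration; real models learn encoders and decoders around it, and none of this is Imaginaire's API.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization: re-style content features.

    content, style: (C, H, W) feature maps. The output keeps the
    spatial structure of `content` but takes the per-channel mean
    and std of `style`. Toy sketch only.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize away the content statistics, impose the style's.
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

rng = np.random.default_rng(1)
content = rng.normal(0.0, 1.0, size=(3, 16, 16))
style = rng.normal(5.0, 2.0, size=(3, 16, 16))
mixed = adain(content, style)
# Channel statistics of the result now track the style features.
print(mixed.mean(axis=(1, 2)))
```

Swapping in different style inputs while holding the content fixed is what yields the "many-to-many" behavior: one content code, many styled outputs.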

For video-to-video translation models, Imaginaire currently covers vid2vid, for high-resolution photorealistic video-to-video translation; fs-vid2vid, for few-shot photorealistic video-to-video translation; and wc-vid2vid, an improved version of vid2vid with better view consistency and long-term temporal consistency.
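What distinguishes video-to-video translation from running an image model frame by frame is that each output frame is conditioned on previously generated frames, which is how such models maintain temporal consistency. The loop below is a toy sketch of that autoregressive pattern; `render_fn` is a hypothetical placeholder standing in for the learned generator, and nothing here reflects Imaginaire's actual code.

```python
import numpy as np

def generate_video(semantic_frames, render_fn, n_context=2):
    """Toy autoregressive generation loop: each frame is rendered
    from the current semantic map plus the last few generated
    frames (the temporal context)."""
    outputs = []
    for t, seg in enumerate(semantic_frames):
        context = outputs[max(0, t - n_context):t]  # past generated frames
        outputs.append(render_fn(seg, context))
    return outputs

def toy_render(seg, context):
    """Placeholder 'generator': blend the semantic map with the mean
    of the context frames to mimic temporal smoothing."""
    if not context:
        return seg.astype(float)
    return 0.5 * seg + 0.5 * np.mean(context, axis=0)

# Five 4x4 'semantic maps' whose values ramp over time.
frames = [np.full((4, 4), t, dtype=float) for t in range(5)]
video = generate_video(frames, toy_render)
print(len(video))  # 5
```

Few-shot variants such as fs-vid2vid additionally condition the generator on a handful of example images of the target subject, but the frame-by-frame recurrence above is the common backbone.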

Imaginaire is released under the Nvidia Software license; consultation with Nvidia is required for commercial use.

The Imaginaire library is on GitHub.


Reporter: Yuan Yuan | Editor: Michael Sarazen



Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we have also introduced a database covering an additional 1,428 artificial intelligence solutions from 12 pandemic scenarios.

Click here to find more reports from us.



We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
