Tag: Image Generation

AI Machine Learning & Data Science Research

Precision in Pixels: NVIDIA’s Edify Image Model Combines High Quality with Unmatched Control

In a new paper Edify Image: High-Quality Image Generation with Pixel Space Laplacian Diffusion Models, an NVIDIA research team introduces Edify Image—a suite of pixel-based diffusion models that achieve high-resolution image synthesis with exceptional control and precision.
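Edify Image diffuses in pixel space over a frequency decomposition of the image. The paper's exact formulation is its own, but the underlying idea of a Laplacian pyramid — splitting an image into detail bands plus a coarse base that reconstruct exactly — can be sketched as follows (the helper names and the nearest-neighbour up/downsampling are our simplifications, not the paper's):

```python
import numpy as np

def downsample(x):
    # Average 2x2 blocks (assumes even height and width).
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Nearest-neighbour 2x upsampling.
    return x.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Decompose an image into high-frequency detail bands plus a low-res base."""
    bands, cur = [], img
    for _ in range(levels):
        low = downsample(cur)
        bands.append(cur - upsample(low))  # detail lost by downsampling
        cur = low
    bands.append(cur)                      # coarsest residual
    return bands

def reconstruct(bands):
    """Invert the decomposition: upsample the base and add the details back."""
    cur = bands[-1]
    for band in reversed(bands[:-1]):
        cur = upsample(cur) + band
    return cur

img = np.random.default_rng(0).random((64, 64))
bands = laplacian_pyramid(img)
recon = reconstruct(bands)
```

In Edify's Laplacian diffusion, the different frequency bands are attenuated at different noise rates; the sketch above only shows the lossless decomposition such a scheme builds on.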

AI Machine Learning & Data Science Research

Meta’s Imagine Flash: Pioneering Ultra-Fast and High-Fidelity Image Generation Within 3 Steps

In a new paper Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation, a Meta GenAI research team introduces an innovative distillation framework aimed at enabling high-fidelity, diverse sample generation within just one to three steps. This framework surpasses existing competitors in both quantitative metrics and human evaluations.
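The core idea of such step distillation — independent of Meta's specific backward-distillation objective — is that the student reuses the same sampling loop as the teacher but is trained so that one to three big steps land where the teacher's many small steps would. A toy numerical sketch (the contraction "denoiser" and analytic student are our stand-ins, not the paper's models):

```python
import numpy as np

def run_sampler(step_fn, x, n_steps):
    """The sampling loop is shared by teacher and student;
    only the learned per-step update differs."""
    for t in range(n_steps, 0, -1):
        x = step_fn(x, t / n_steps)
    return x

# Hypothetical 1-D "image": the data point the models should reach.
target = np.array([0.2, -0.7, 1.3])

# Teacher: many small denoising steps (a fixed contraction toward target).
teacher_step = lambda x, t: x + 0.2 * (target - x)
teacher_out = run_sampler(teacher_step, np.zeros(3), n_steps=50)

# Student after distillation: trained so ONE big step reproduces the
# teacher's whole trajectory. Here we cheat and define it analytically.
student_step = lambda x, t: teacher_out
student_out = run_sampler(student_step, np.zeros(3), n_steps=1)
```

In practice the student is a neural network optimized so its few-step outputs match the teacher's, rather than being defined in closed form as above.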

AI Machine Learning & Data Science Research

OpenAI’s unCLIP Text-to-Image System Leverages Contrastive and Diffusion Models to Achieve SOTA Performance

In the new paper Hierarchical Text-Conditional Image Generation with CLIP Latents, an OpenAI research team combines the advantages of contrastive and diffusion models for text-conditional image generation tasks. Their proposed unCLIP model improves image diversity with minimal loss in photorealism and caption similarity, and produces image quality comparable to the state-of-the-art text-to-image system GLIDE.
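unCLIP's generation is a two-stage pipeline: a prior maps the CLIP text embedding to a CLIP image embedding, and a decoder turns that image embedding into pixels. A minimal sketch of the control flow, with stub functions standing in for the learned models (all names and shapes here are our illustrative assumptions, not OpenAI's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_text_encoder(caption):
    """Stub: map a caption to a CLIP-style text embedding (seeded hash)."""
    seed = sum(map(ord, caption)) % (2**32)
    return np.random.default_rng(seed).standard_normal(8)

def prior(text_emb):
    """Stub for the diffusion prior: text embedding -> CLIP *image* embedding."""
    return text_emb + 0.1 * rng.standard_normal(text_emb.shape)

def decoder(image_emb):
    """Stub for the diffusion decoder: CLIP image embedding -> pixels."""
    return np.tanh(np.outer(image_emb, image_emb))  # fake 8x8 "image"

def unclip_generate(caption):
    # Stage 1: encode the text; stage 2: sample an image embedding from
    # the prior; stage 3: decode that embedding into an image.
    return decoder(prior(clip_text_encoder(caption)))

img = unclip_generate("a corgi playing a trumpet")
```

The decoupling is what buys the diversity gains the paper reports: many distinct image embeddings are consistent with one caption, and the decoder renders each differently.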

AI Computer Vision & Graphics Machine Learning & Data Science Research


Google’s MaskGIT Outperforms SOTA Transformer Models on Conditional Image Generation and Accelerates Autoregressive Decoding by up to 64x

A Google Research team proposes Masked Generative Image Transformer (MaskGIT), a novel image synthesis paradigm that uses a bidirectional transformer decoder. MaskGIT significantly outperforms state-of-the-art transformer models on the ImageNet dataset and accelerates autoregressive decoding by up to 64x.
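The speedup comes from decoding tokens in parallel rather than one at a time: all tokens start masked, each round the bidirectional model predicts every masked position at once, and only the most confident predictions are committed while the rest are re-masked under a shrinking schedule. A toy version of that loop (the cosine schedule follows the paper's spirit, but the stub "model" and all names are our illustrative assumptions):

```python
import numpy as np

MASK = -1  # sentinel for a not-yet-decoded token

def maskgit_decode(predict_fn, n_tokens, n_iters=8):
    """Iterative parallel decoding in the spirit of MaskGIT (toy version)."""
    tokens = np.full(n_tokens, MASK)
    for it in range(1, n_iters + 1):
        preds, conf = predict_fn(tokens)      # predict ALL positions at once
        masked = tokens == MASK
        # Cosine schedule: how many tokens should STILL be masked after this round.
        n_mask_target = int(np.floor(np.cos(np.pi / 2 * it / n_iters) * n_tokens))
        n_fill = masked.sum() - n_mask_target
        if n_fill <= 0:
            continue
        conf = np.where(masked, conf, -np.inf)  # only fill masked slots
        # Commit the n_fill most confident predictions; keep the rest masked.
        fill_idx = np.argsort(-conf)[:n_fill]
        tokens[fill_idx] = preds[fill_idx]
    return tokens

# Stub "transformer": predicts token i == i, with random confidences.
rng = np.random.default_rng(0)
predict = lambda tokens: (np.arange(tokens.size), rng.random(tokens.size))

out = maskgit_decode(predict, n_tokens=16)
```

Because the whole sequence is decoded in a fixed, small number of parallel rounds instead of one token per step, the iteration count no longer grows with the number of image tokens — the source of the reported up-to-64x speedup over autoregressive decoding.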