Compared to nocturnal animals that must hunt and forage in the dark to survive, humans have relatively weak night vision, limiting our perception and understanding of the environment under conditions such as moonless nights. While photographers can use long exposures to collect enough light to capture static scenes in low light, accurately capturing moving objects remains challenging: they require short exposures, at which point camera noise overwhelms and obscures the images.
In the new paper Dancing Under the Stars: Video Denoising in Starlight, a research team from UC Berkeley and Intel Labs leverages a GAN-tuned, physics-based noise model to represent camera noise under low-light conditions and trains a novel denoiser that, for the first time, achieves photorealistic video denoising in starlight.
A sunny day has an illumination level of about 100 kilolux, while moonlight produces only about 1 lux. The researchers set their sights several orders of magnitude lower, aiming for video denoising at the sub-millilux (starlight only) level. They achieve this via: 1) a high-quality CMOS camera optimized for low-light imaging and set to its highest gain setting; 2) learning the camera's noise model with a physics-inspired noise generator and easy-to-obtain still noisy images from the camera; and 3) using synthetic clean/noisy video pairs generated by the noise model to train their video denoiser.
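To make step 2 concrete, the sketch below shows what a physics-inspired noise model for a sensor typically looks like, combining Poisson shot noise, Gaussian read noise, per-row banding noise, and ADC quantization. This is a minimal illustration, not the paper's actual model: the function name and all parameter values are hypothetical, and the paper fits its noise parameters to a real camera.

```python
import numpy as np

def synthesize_noisy_frame(clean, gain=16.0, read_sigma=2.0,
                           row_sigma=1.0, quant_step=1.0, rng=None):
    """Illustrative physics-inspired noise sketch: shot noise (Poisson),
    Gaussian read noise, per-row banding noise, and quantization.
    Parameter values here are placeholders, not fitted camera values."""
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: photon arrivals in electrons are Poisson-distributed.
    electrons = rng.poisson(np.clip(clean, 0, None) / gain) * gain
    # Read noise: per-pixel Gaussian from the sensor's readout circuitry.
    noisy = electrons + rng.normal(0.0, read_sigma, clean.shape)
    # Row noise: a shared Gaussian offset per sensor row (banding).
    noisy += rng.normal(0.0, row_sigma, (clean.shape[0], 1))
    # Quantization: the ADC rounds values to discrete levels.
    return np.round(noisy / quant_step) * quant_step
```

Applied to a clean video frame, such a function yields a paired noisy frame, which is how synthetic clean/noisy training pairs for the denoiser can be produced at scale.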
Unlike current deep learning-based approaches that train on a large number of image pairs to obtain good denoising performance in low-light scenarios, the team's proposed physics-based noise generator is trained on a limited dataset of clean/noisy image bursts and does not require extra experimental motion-aligned clean/noisy video clips. This approach greatly reduces training costs while yielding competitive performance. To push their noise generator toward realistic noise samples at each forward pass, the team employs a GAN-based adversarial setup in which a discriminator judges the synthesized noisy images against real camera noise.
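The adversarial idea can be sketched as follows. A toy linear discriminator scores patches on simple statistics (standing in for the paper's learned CNN discriminator), and the standard non-saturating GAN losses pit it against the noise generator: the discriminator learns to separate real noisy patches from synthesized ones, while the generator's loss pushes its samples toward being judged real. All names and the feature choice here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def discriminator_score(patch, w, b):
    """Toy linear discriminator on basic patch statistics (mean, std).
    Purely illustrative; the paper uses a learned CNN discriminator."""
    feats = np.array([patch.mean(), patch.std()])
    return feats @ w + b  # logit: higher means "looks real"

def gan_losses(real_patches, fake_patches, w, b):
    """Non-saturating GAN losses over batches of noisy patches."""
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))
    d_real = np.array([sigmoid(discriminator_score(p, w, b))
                       for p in real_patches])
    d_fake = np.array([sigmoid(discriminator_score(p, w, b))
                       for p in fake_patches])
    eps = 1e-8
    # Discriminator: label real patches 1, synthesized patches 0.
    d_loss = (-np.mean(np.log(d_real + eps))
              - np.mean(np.log(1.0 - d_fake + eps)))
    # Generator: wants its synthesized patches judged as real.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss
```

Minimizing the generator loss while the discriminator improves is what tunes the noise model's parameters so that its synthetic noise becomes statistically indistinguishable from the camera's real noise.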
In their empirical studies, the team compared their noise generator against existing noise model baselines for low-light imaging, and compared their full noise-generator-plus-denoiser pipeline with several existing denoising schemes. The results show that the proposed method significantly outperforms all baselines, demonstrating photorealistic video denoising in starlight.
Overall, this work showcases the potential of deep-learning-based denoising under extremely low-light conditions. The team hopes their efforts can lead to future scientific discoveries in this computer vision research area and help advance robot vision performance in extremely dark settings.
The paper Dancing Under the Stars: Video Denoising in Starlight is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.