AI Machine Learning & Data Science Research

Alibaba’s VQRF Realizes a 100x Compression Rate, Reducing Volumetric Radiance Files to 1 MB

In the new paper Compressing Volumetric Radiance Fields to 1 MB, an Alibaba Group research team proposes vector quantized radiance fields (VQRF), a simple yet efficient framework for compressing volumetric radiance fields that achieves up to 100x storage reduction, reducing original grid model size to around 1 MB with negligible loss on rendering quality.

AI-powered image synthesis has achieved results that would have been unimaginable just a few years ago, and there is much more to come. Introduced in a 2020 ECCV paper, neural radiance fields (NeRF) use deep neural networks to model and render entire 3D scenes from 2D images, a technique poised to supercharge virtual and augmented reality applications. However, current NeRF approaches render via volumetric grids that store per-voxel information, incurring a huge storage overhead that can reach hundreds of megabytes for a single scene.

The researchers’ novel approach is based on their observation that only 10 percent of voxels typically contribute over 99 percent of importance scores in a grid model — indicating a significant redundancy that can be targeted to improve model efficiency.

The proposed VQRF pipeline comprises three steps: 1) voxel pruning, which filters out the voxels that contribute least to overall rendering quality; 2) vector quantization, an optimization strategy that further reduces model size by encoding important voxel features into a compact codebook; and 3) post-processing, in which simple uniform weight quantization is applied to the density voxels and the non-vector-quantized feature voxels to obtain a model with a small storage cost.
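The vector quantization step can be illustrated with a minimal sketch. The snippet below uses a plain k-means quantizer to map per-voxel feature vectors onto a small codebook, so each voxel is stored as a single index rather than a full feature vector; the function name, codebook size, and training loop are illustrative assumptions, not the paper's importance-weighted procedure.

```python
import numpy as np

def vector_quantize(features, codebook_size=256, iters=10, seed=0):
    """Quantize (N, D) voxel features into a small codebook (k-means sketch).

    Returns (codebook, indices): each voxel is then stored as one codebook
    index instead of D floats. Illustrative only, not the exact VQRF training.
    """
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen voxel features.
    codebook = features[rng.choice(len(features), codebook_size, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest codebook entry.
        d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(1)
        # Move each codebook entry to the mean of its assigned features.
        for k in range(codebook_size):
            members = features[idx == k]
            if len(members):
                codebook[k] = members.mean(0)
    return codebook, idx

# Toy example: 10,000 voxels with 12-dim float32 features.
feats = np.random.default_rng(1).normal(size=(10_000, 12)).astype(np.float32)
codebook, idx = vector_quantize(feats, codebook_size=64)
original_bytes = feats.nbytes                              # full feature grid
compressed_bytes = codebook.nbytes + idx.astype(np.uint8).nbytes
print(f"compression ratio: {original_bytes / compressed_bytes:.1f}x")
```

Even this toy setting shows the storage arithmetic behind the method: replacing each 12-float feature with a 1-byte index plus a shared codebook shrinks the grid by more than an order of magnitude before any pruning or weight quantization is applied.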

An attractive feature of the proposed pipeline is that its pruning strategy generalizes across different scenarios and methods thanks to a quantile function that adaptively selects the voxel pruning threshold, discarding the low-importance voxels that fall below it.
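The idea of an adaptive, importance-based threshold can be sketched as follows. The function below keeps the smallest set of voxels whose importance scores sum to a target fraction of the total, which mirrors the quantile-style selection described above; the function name, the Pareto-distributed toy scores, and the 99% default are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def adaptive_prune_mask(importance, keep_fraction=0.99):
    """Return a boolean mask keeping the fewest voxels whose importance
    sums to `keep_fraction` of the total (quantile-style threshold sketch)."""
    order = np.argsort(importance)[::-1]             # most important first
    csum = np.cumsum(importance[order])
    # First position where the cumulative sum reaches the target fraction.
    cutoff = np.searchsorted(csum, keep_fraction * csum[-1]) + 1
    mask = np.zeros(importance.shape, dtype=bool)
    mask[order[:cutoff]] = True
    return mask

# Heavy-tailed toy scores mimic the paper's observation that a small share
# of voxels carries nearly all of the total importance.
rng = np.random.default_rng(0)
scores = rng.pareto(1.0, size=100_000)
mask = adaptive_prune_mask(scores, keep_fraction=0.99)
print(f"kept {mask.mean():.1%} of voxels")
```

Because the cutoff is derived from the score distribution itself rather than a fixed magnitude, the same procedure adapts automatically to scenes or methods whose importance scores have very different scales.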

The team’s empirical study compared the proposed VQRF with the original NeRF, uncompressed volumetric radiance fields, and other methods. In the evaluations, VQRF achieved compression ratios of up to 100x relative to the original grid models, reducing model size to around 1 MB without degrading visual quality.

Overall, this work validates the effectiveness and generalization ability of the proposed VQRF framework, establishing it as a promising approach for reducing the high costs associated with NeRFs and volumetric radiance fields.

The code is available on the project’s GitHub. The paper Compressing Volumetric Radiance Fields to 1 MB is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
