An enormous amount of data is produced daily in our increasingly digital world. The neural compression algorithms designed to deal with this deluge have typically relied on autoencoders with specialized encoder and decoder architectures for different data modalities, with a strong focus on image and video data.
A 2021 study by Dupont et al. proposed COIN (Compression with Implicit Neural Representations), a neural compression framework that bypasses specialized encoders and decoders. Instead of storing the RGB values for each pixel of an image, COIN stores the weights of a neural network overfitted to the image.
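The core COIN idea can be illustrated with a toy sketch: an image becomes a small coordinate network f(x, y) → (r, g, b), and the "compressed file" is just that network's weights. The layer widths, activation, and bitwidths below are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

# Toy sketch of COIN-style storage: an image is represented by a small MLP
# mapping normalized pixel coordinates to RGB; compressed size = weight count.
# Architecture and numbers are illustrative, not the paper's configuration.

def mlp_sizes(layers):
    """Parameter count of a fully connected net with the given layer widths."""
    return sum(din * dout + dout for din, dout in zip(layers, layers[1:]))

def forward(params, coords, layers):
    """Evaluate the MLP at pixel coordinates (sine activations, as in SIREN)."""
    h = coords
    idx = 0
    for i, (din, dout) in enumerate(zip(layers, layers[1:])):
        W = params[idx:idx + din * dout].reshape(din, dout); idx += din * dout
        b = params[idx:idx + dout]; idx += dout
        h = h @ W + b
        if i < len(layers) - 2:          # hidden layers use sine activations
            h = np.sin(30.0 * h)
    return h

layers = [2, 32, 32, 3]                  # (x, y) -> (r, g, b)
n_params = mlp_sizes(layers)
params = np.random.randn(n_params) * 0.1

# Querying the net at every pixel coordinate reconstructs the image.
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([xs.ravel() / W, ys.ravel() / H], axis=1)
image = forward(params, coords, layers)  # shape (H*W, 3)

pixels_bytes = H * W * 3                 # 8-bit RGB original
weights_bytes = n_params * 2             # 16-bit weights
print(f"{n_params} weights, ratio {pixels_bytes / weights_bytes:.2f}x")
```

In the actual method, the weights would be fitted by gradient descent until the network reproduces the image; here random weights suffice to show the storage accounting.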
In the new paper COIN++: Data Agnostic Neural Compression, a research team from the University of Oxford builds on this idea with COIN++, a data-agnostic general neural compression framework that can seamlessly handle a wider range of modalities, from images to medical and climate data.
Despite the advantages of COIN, the Oxford team identifies a number of drawbacks in the framework: 1) slow encoding; 2) lack of a shared structure across data points; and 3) performance well below that of state-of-the-art (SOTA) image codecs.
To address these issues, the proposed COIN++ framework uses meta-learning to reduce encoding time by more than two orders of magnitude, encoding in under a second data that COIN takes minutes or hours to process. The team also trained a base network that encodes structure shared across the dataset and applied modulations to this network to encode instance-specific information. Finally, they boosted performance by quantizing and entropy coding the modulations. The resulting COIN++ significantly exceeds COIN in both compression and speed.
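The speedup from meta-learning can be sketched with a toy example: instead of training a network from scratch per data point, encoding runs only a few gradient steps on a small set of instance-specific parameters, starting from a shared (meta-learned) initialization. The model below is a trivial stand-in, and the step count and learning rate are illustrative.

```python
import numpy as np

# Toy sketch of few-step encoding from a meta-learned init. The shared
# parameters stand in for the meta-learned base; only the small per-instance
# "modulation" vector is optimized at encoding time. Numbers are illustrative.

rng = np.random.default_rng(1)
base = rng.normal(size=16)                  # shared, meta-learned parameters (frozen)
target = base + 0.3 * rng.normal(size=16)   # signal to encode

def encode(n_steps, lr=0.5):
    """Fit instance-specific modulations to the target with a few SGD steps."""
    mod = np.zeros_like(base)               # per-instance parameters start at zero
    for _ in range(n_steps):
        residual = (base + mod) - target    # gradient of 0.5 * MSE w.r.t. mod
        mod -= lr * residual
    return mod

mod = encode(3)                             # encoding = 3 gradient steps
err = np.abs(base + mod - target).max()
print(f"max reconstruction error after 3 steps: {err:.4f}")
```

Because the shared initialization is already close to every instance, a handful of steps suffices, which is what collapses encoding time from minutes or hours to under a second.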
COIN++ stores only a set of modulations applied to a shared base network: shared information lives in the base network's weights, while instance-specific information lives in the modulations. To further reduce storage, the modulations themselves are produced by a linear map from a compact latent vector, so only that latent vector needs to be stored per instance.
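This split can be sketched as follows: the base network and the latent-to-modulation map are shared across the whole dataset, while each instance is stored only as a small latent vector that is expanded into additive shifts on the hidden activations. Dimensions and the shift-style modulation below are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the COIN++ modulation scheme (dimensions illustrative).
# Shared across the dataset: base net (W1, b1, W2, b2) and latent->shift map M.
# Stored per instance: only the small latent vector z.

rng = np.random.default_rng(0)
hidden, latent_dim = 32, 8
W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)   # shared base network
W2, b2 = rng.normal(size=(hidden, 3)), np.zeros(3)
M = rng.normal(size=(latent_dim, hidden))                 # shared linear map

def decode(z, coords):
    """Reconstruct pixels from a per-instance latent z via the shared base net."""
    shift = z @ M                          # instance-specific modulations
    h = np.sin(30.0 * (coords @ W1 + b1 + shift))
    return h @ W2 + b2

z = rng.normal(size=latent_dim)            # everything stored for this instance
coords = rng.uniform(size=(5, 2))          # 5 query pixel coordinates
out = decode(z, coords)
print(out.shape)                           # one RGB triple per coordinate
```

Per-instance storage is thus `latent_dim` numbers rather than a full network's weights, which is where the savings over COIN come from.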
The team found that modulations are surprisingly quantizable, so they used uniform quantization to quantize the modulations to short bitwidths, which improved compression by a factor of six with little cost in reconstruction quality. The team used a simple approach for modelling the distribution of the quantized codes — counting the frequency of each quantized modulation value and using this distribution for arithmetic coding — which reduced storage by a further 8–15 percent.
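The two post-processing steps above can be sketched numerically: uniformly quantize the modulations to a small bitwidth, then build a frequency table over the quantized symbols and estimate the savings from entropy coding (a real arithmetic coder would approach this entropy bound). The bitwidth and the Gaussian stand-in for trained modulations are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of quantization + entropy coding of modulations.
# Values are illustrative; real modulations come from training.

rng = np.random.default_rng(2)
mods = rng.normal(scale=0.5, size=10_000)     # stand-in for trained modulations

bits = 5                                      # target bitwidth
levels = 2 ** bits
lo, hi = mods.min(), mods.max()
step = (hi - lo) / (levels - 1)
q = np.round((mods - lo) / step).astype(int)  # uniform quantization
dequant = q * step + lo                       # reconstruction after quantization
mse = np.mean((mods - dequant) ** 2)

# Frequency table over quantized symbols -> empirical entropy in bits/symbol,
# the rate an arithmetic coder using this table would approach.
counts = np.bincount(q, minlength=levels)
p = counts[counts > 0] / counts.sum()
entropy = -(p * np.log2(p)).sum()

print(f"fixed-width: {bits} bits/symbol, entropy bound: {entropy:.2f} bits/symbol")
print(f"quantization MSE: {mse:.6f}")
```

Because the quantized symbols are far from uniformly distributed, the entropy falls below the fixed bitwidth, which is exactly the slack that arithmetic coding converts into extra storage savings.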
The team evaluated COIN++ on images, medical data and climate data, where it demonstrated its ability to handle a wide variety of modalities and significantly improve compression performance and encoding time compared to COIN. The team hopes their COIN++ framework can advance the progress of neural compression techniques and inspire additional research on how and where data can be stored as neural networks through COIN-like approaches.
The paper COIN++: Data Agnostic Neural Compression is on arXiv.
Author: Hecate He | Editor: Michael Sarazen