AI Machine Learning & Data Science Research

DeepMind’s Meta-Learning Sparse Compression Networks Set New SOTA on Diverse Modality Data Compression

Data representation is a core aspect of deep learning. While researchers have typically represented images or 3D shapes as multi-dimensional arrays, the use of implicit neural representations (INRs) has emerged as an attractive alternative. Recent work with INRs has shown that they can outperform established compression methods such as JPEG.
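
To make the idea concrete, the sketch below shows a generic INR, not DeepMind's implementation: a small sine-activated MLP that maps 2-D pixel coordinates to RGB values and is overfit to a single image, so that the network's weights become the compressed representation of that image. The module and function names are illustrative, and the SIREN-specific weight initialization is omitted for brevity.

```python
# Minimal illustrative INR: fit a coordinate MLP to one image.
# Generic sketch, not DeepMind's MSCN implementation.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Sine-activated linear layer (SIREN-style), a common INR building block."""
    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

class INR(nn.Module):
    """Maps 2-D pixel coordinates in [-1, 1]^2 to RGB values."""
    def __init__(self, hidden=64, layers=3):
        super().__init__()
        blocks = [SineLayer(2, hidden)]
        blocks += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        self.net = nn.Sequential(*blocks, nn.Linear(hidden, 3))

    def forward(self, coords):
        return self.net(coords)

def fit_image(image, steps=2000, lr=1e-4):
    """Overfit one INR to one (h, w, 3) image tensor; the weights are the code."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)

    model = INR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return model  # storing model.parameters() stands in for storing the image
```

Storing the (typically quantized) weights returned by a routine like `fit_image`, rather than the raw pixel array, is what turns an INR into a compression scheme.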

In the new paper Meta-Learning Sparse Compression Networks, a DeepMind research team proposes steps for making INRs more scalable. Their resulting meta-learning sparse compression networks (MSCN) are able to effectively represent diverse data modalities such as images, manifolds, signed distance functions, 3D shapes and scenes; and achieve state-of-the-art results on some of them.

The researchers note that despite INRs’ impressive compression performance, they still have limitations: the trade-off between network size and approximation quality typically requires architecture search or strong inductive biases, and fitting a neural network to each signal is far more computationally expensive than applying traditional compression methods such as JPEG (introduced in 1992).

The proposed MSCNs are designed to address these issues. The team adopts state-of-the-art sparsity techniques that explicitly optimize INRs to use as few parameters as possible, greatly reducing storage and computational cost; and applies meta-learning techniques such as MAML so that the INR for a single signal can be obtained by fine-tuning a learned initialization in only a few optimization steps.
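
The sketch below is a simplified illustration rather than the authors' code: a first-order MAML-style outer loop learns a shared initialization, while a magnitude-based binary mask keeps each per-signal network sparse during inner-loop fine-tuning. Helper names such as `sparse_mask` and `inner_adapt` are hypothetical, and the actual MSCN procedure and its sparsity criterion are more involved.

```python
# Hedged sketch: meta-learning a sparse INR initialization.
# `model` can be any coordinate network, e.g. the INR module sketched earlier.
import copy
import torch

def sparse_mask(model, keep_ratio=0.1):
    """Binary masks keeping the top-|w| fraction of weights in each layer."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:                      # leave biases dense in this sketch
            masks[name] = torch.ones_like(p)
            continue
        k = max(1, int(keep_ratio * p.numel()))
        thresh = p.detach().abs().flatten().topk(k).values.min()
        masks[name] = (p.detach().abs() >= thresh).float()
    return masks

def apply_masks(model, masks):
    """Zero out the pruned weights in place."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.mul_(masks[name])

def inner_adapt(model, coords, target, masks, steps=3, lr=1e-2):
    """Fine-tune a copy of the shared initialization on one signal, under the mask."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((adapted(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
        apply_masks(adapted, masks)          # keep the per-signal network sparse
    return adapted

def meta_train(model, tasks, outer_lr=1e-4, keep_ratio=0.1):
    """tasks: iterable of (coords, target) pairs, one per signal."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=outer_lr)
    for coords, target in tasks:
        masks = sparse_mask(model, keep_ratio)
        adapted = inner_adapt(model, coords, target, masks)
        outer_loss = ((adapted(coords) - target) ** 2).mean()
        grads = torch.autograd.grad(outer_loss, list(adapted.parameters()))
        meta_opt.zero_grad()
        # First-order update: treat the adapted gradients as gradients of the init.
        for p, g in zip(model.parameters(), grads):
            p.grad = g.clone()
        meta_opt.step()
```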

Because the sparsity procedure supports efficient backpropagation and an initialization optimized for sparse signals, the team can reframe it as searching for the network structure best suited to a given task. The approach is also flexible enough to allow for weight, representation, group and gradient sparsity with minimal changes, making it a good candidate for a wide variety of applications.
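
As a rough illustration of that flexibility (again a generic sketch with hypothetical helper names, not the paper's criteria), the same masking idea can be applied at different granularities: individual weights, whole neurons (group sparsity), or the gradient updates themselves.

```python
# Illustrative masks at different granularities for a single 2-D weight matrix `w`.
import torch

def weight_mask(w, keep_ratio=0.1):
    """Unstructured sparsity: keep the largest-magnitude individual weights."""
    k = max(1, int(keep_ratio * w.numel()))
    thresh = w.abs().flatten().topk(k).values.min()
    return (w.abs() >= thresh).float()

def group_mask(w, keep_ratio=0.1):
    """Group (structured) sparsity: keep whole output neurons, ranked by L2 norm."""
    k = max(1, int(keep_ratio * w.shape[0]))
    norms = w.norm(dim=1)
    thresh = norms.topk(k).values.min()
    return (norms >= thresh).float().unsqueeze(1).expand_as(w)

def masked_grad_step(w, grad, mask, lr=1e-2):
    """Gradient sparsity: update only the entries selected by the mask."""
    return w - lr * grad * mask
```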

The team evaluated their MSCN framework in experiments on datasets and modalities ranging from images to manifolds, voxels, signed distance functions and scenes, comparing it with baselines that include MAML+OneShot, MAML+IMP, Dense-Narrow and Scratch.

The proposed MSCN achieved competitive results in the experiments, even surpassing some state-of-the-art techniques.

The team says their framework could be particularly well suited to meta-learning applications where fast inference is required, and notes that the ideas introduced in the paper can also be easily combined with some of the baselines. For future work, the team believes a latent code approach could be competitive if only a sparse subset of modulations needs to be reconstructed.

The paper Meta-Learning Sparse Compression Networks is on arXiv.


Author: Hecate He | Editor: Michael Sarazen

