Launched at CVPR 12 years ago, the ImageNet database now comprises more than 14 million labelled images and has become one of the most influential benchmarks in computer vision. While researchers continually strive to boost model accuracy on ImageNet, relatively little effort has gone into improving the resource efficiency of ImageNet supervised learning. Moreover, ImageNet is a static dataset, while real-world data often arrives as a stream and at a much larger scale.
In the new paper One Pass ImageNet, a DeepMind research team presents the One Pass ImageNet (OPIN) problem, designed to study the space and compute efficiency of deep learning in a streaming setting with constrained data storage. The goal is to develop systems that can train a model when each example is passed to the system only once.

The OPIN problem assumes that inputs arrive in mini-batches and never repeat; once the entire dataset has been revealed, training ends. Unlike standard ImageNet benchmarks, which focus on model accuracy, OPIN evaluates learning capability under constrained space and compute. The researchers use three major metrics: 1) Accuracy, measured as top-1 accuracy on the test set; 2) Space, measured as the total additional data storage needed; and 3) Compute, measured as the total number of global backpropagation steps.
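To make the setting concrete, here is a minimal sketch of the naive one-pass baseline (our illustration, not DeepMind's code); the `stream` iterator, learning rate and optimizer choice are assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(num_classes=1000)   # random init: the "cold-start" problem
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
backprop_steps = 0                    # the "Compute" metric

# `stream` is assumed to be any iterator that yields each (images, labels)
# mini-batch exactly once, e.g. a DataLoader over the training set.
for images, labels in stream:
    loss = F.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    backprop_steps += 1
# No second pass: training ends once the stream is exhausted.
```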
In the paper, the team identifies four properties of the OPIN problem:
- The cold-start problem: The model starts from random initialization, so representation learning in OPIN is challenging, especially in the early stages of training.
- The forgetting problem: Each example is passed to the model only once. Even though the data is i.i.d. (independent and identically distributed), vanilla supervised learning is likely to forget early examples.
- A natural ordering of data: No artificial ordering is imposed on the data, so it can be treated as i.i.d., unlike many existing continual learning benchmarks.
- Multiple objectives: Methods are evaluated on three metrics (accuracy, space and compute), so the goal is to improve all three within a single training method.
In their evaluations, the team adopted ResNet-50 (a 50-layer residual neural network), the model commonly used in multi-epoch ImageNet training, and ran experiments with replay steps of 1, 3, 5 and 8 and replay buffer sizes of 1, 5 and 10 percent of the dataset.
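The key mechanism is a small replay buffer from which previously seen examples are re-sampled for extra gradient steps. Below is a simplified sketch of loss-prioritized replay under a fixed storage budget; the prioritization and eviction rules here are our assumptions and may differ from the paper's exact scheme:

```python
import random

class PrioritizedReplayBuffer:
    """Fixed-capacity buffer that keeps and re-samples high-loss examples."""

    def __init__(self, capacity):
        self.capacity = capacity     # the "Space" budget, e.g. 1% of the data
        self.items = []              # list of (example, priority) pairs

    def add(self, example, priority):
        if len(self.items) < self.capacity:
            self.items.append((example, priority))
        else:
            # Evict the lowest-priority item if the new one outranks it.
            i = min(range(len(self.items)), key=lambda j: self.items[j][1])
            if priority > self.items[i][1]:
                self.items[i] = (example, priority)

    def sample(self, batch_size):
        # Sample proportionally to priority (here, per-example loss).
        weights = [p for _, p in self.items]
        batch = random.choices(self.items, weights=weights, k=batch_size)
        return [example for example, _ in batch]
```

Per incoming mini-batch, the learner would take one gradient step on the fresh data, insert the examples with their losses as priorities, and then take the configured number of replay steps (1, 3, 5 or 8) on batches sampled from the buffer.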

The team summarizes their experimental results as follows:
- Prioritized replay with a 10% memory size achieves performance very close to multi-epoch training at the same computational cost, even though the multi-epoch method uses the full dataset and thus requires large data storage. While it is unknown whether multi-epoch training represents a performance upper bound, the team believes it is a strong target to reference.
- 1% data storage gives a strong starting point for prioritized replay. A buffer holding just 1 percent of the data (equivalent to 100 mini-batches) dramatically improves naive one-pass performance, by 28.7%. According to the paper's results table, the accuracy gain from 1% to 10% storage is 1.0% at the 2-epoch compute budget (replay step 1), 1.7% at 4 epochs, 3.3% at 6 epochs and 5.7% at 9 epochs.
- Larger buffers yield bigger accuracy gains when more replay steps are used. Going from a 5% to a 10% buffer, accuracy increases by 0.6% and 0.1% for replay steps 1 and 3 respectively, but by 0.9% for replay step 5. Accuracy saturates quickly when only the storage size or only the number of replay steps is increased; increasing both together could yield a much larger boost.
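To see why replay steps of 1, 3, 5 and 8 line up with 2-, 4-, 6- and 9-epoch compute budgets, here is a back-of-the-envelope check (the batch size is our assumption; only the relative step counts matter):

```python
# ImageNet-1k training set size; the batch size is an assumed value.
DATASET_SIZE = 1_281_167
BATCH_SIZE = 128
steps_per_pass = DATASET_SIZE // BATCH_SIZE   # gradient steps in one epoch

def opin_steps(replay_steps):
    # One step on each fresh batch plus `replay_steps` replayed steps.
    return steps_per_pass * (1 + replay_steps)

def multi_epoch_steps(epochs):
    return steps_per_pass * epochs

for r in (1, 3, 5, 8):
    assert opin_steps(r) == multi_epoch_steps(r + 1)
    print(f"replay_steps={r}: matches a {r + 1}-epoch compute budget")
```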
The team hopes their work will inspire researchers to focus on improving resource efficiency in supervised learning, which could help promote the further development and industrialization of deep learning.
The paper One Pass ImageNet is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
