
Batchboost: Regularization for Stabilizing Training With Resistance to Underfitting & Overfitting

Batchboost is a simple technique to accelerate ML model training by adaptively feeding mini-batches with artificial samples created by mixing two examples from the previous step, favoring pairings that produce difficult examples.

Content provided by Maciej A. Czyzewski, the author of the paper Batchboost: Regularization for Stabilizing Training With Resistance to Underfitting & Overfitting.


What’s New: In this research, we state the hypothesis that mixing many images can be more effective than mixing just two. To make this efficient, we propose a new method of creating mini-batches in which each sample from the dataset is propagated through subsequent iterations with less and less importance until the end of the learning process.

How It Works: The batchboost pipeline has three stages:
(a) pairing: a method of selecting two samples from the previous step.
(b) mixing: a method of creating a new artificial example from the two selected samples.
(c) feeding: constructing the training mini-batch from the created examples and new samples from the dataset (concatenated with ratio γ).
Note that a sample from the dataset propagates through subsequent iterations with less and less importance until the end of training.
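
To make the “less and less importance” behaviour concrete, here is a minimal numerical sketch (not from the paper; it assumes, purely for illustration, that the artificial example carrying an original sample is re-mixed at every step with a mixup-style Beta coefficient):

```python
import numpy as np

rng = np.random.default_rng(0)

# Residual weight an original sample still carries inside the artificial
# example after being re-mixed k times. Each re-mix multiplies the weight
# by a factor in (0, 1) (lambda or 1 - lambda, depending on which slot of
# the pair the sample ends up in), so its influence fades gradually.
weight = 1.0
for step in range(1, 9):
    lam = rng.beta(1.0, 1.0)  # illustrative mixup-style coefficient
    weight *= lam
    print(f"after re-mix {step}: residual weight = {weight:.4f}")
```

Since every factor lies in (0, 1), the residual weight shrinks towards zero, which is the fading influence described above.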

Our baseline implements the pairing stage by sorting samples by error, so that the hardest examples are paired with the easiest ones. The mixing stage merges the two samples using mixup, λx1 + (1 − λ)x2. The feeding stage combines the newly created examples with fresh samples from the dataset at a 1:1 ratio using concatenation.
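
As a rough sketch of what one such step could look like under these baseline choices (a NumPy illustration, not the authors’ implementation; names such as batchboost_step, per_sample_loss and alpha are hypothetical):

```python
import numpy as np

def batchboost_step(prev_inputs, prev_targets, per_sample_loss,
                    new_inputs, new_targets, alpha=1.0, rng=None):
    """One batchboost feeding step: pair, mix, then concatenate (ratio 1:1).

    prev_inputs, prev_targets: the previous mini-batch (targets one-hot).
    per_sample_loss: training error of each previous sample (pairing key).
    new_inputs, new_targets: fresh samples drawn from the dataset.
    """
    rng = rng or np.random.default_rng()

    # (a) pairing: sort by error and pair the hardest with the easiest.
    order = np.argsort(per_sample_loss)
    half = len(order) // 2
    easy = order[:half]            # lowest-error samples
    hard = order[-half:][::-1]     # highest-error samples, hardest first

    # (b) mixing: mixup of each (hard, easy) pair, lam*x1 + (1 - lam)*x2.
    lam = rng.beta(alpha, alpha, size=half)
    lam_x = lam.reshape(-1, *([1] * (prev_inputs.ndim - 1)))
    lam_y = lam.reshape(-1, *([1] * (prev_targets.ndim - 1)))
    mixed_inputs = lam_x * prev_inputs[hard] + (1 - lam_x) * prev_inputs[easy]
    mixed_targets = lam_y * prev_targets[hard] + (1 - lam_y) * prev_targets[easy]

    # (c) feeding: concatenate the mixed half with fresh samples, ratio 1:1.
    inputs = np.concatenate([new_inputs, mixed_inputs])
    targets = np.concatenate([new_targets, mixed_targets])
    return inputs, targets
```

In this sketch, a batch of size B is built from B/2 mixed examples taken from the previous step and B/2 fresh samples, matching the 1:1 feeding ratio described above.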

Key Insights: The results are promising: batchboost achieves 0.5–3% better accuracy than the current state-of-the-art mixup regularization on CIFAR-10 (#10 place on paperswithcode.com) and Fashion-MNIST. We hope to see our method in action, for example on Kaggle, as a trick to squeeze a bit more test accuracy out of a model.

Behind The Scenes: There is a lot to improve in data augmentation and regularization methods. An interesting topic for further research and discussion is the combination of batchboost with existing methods.

The paper Batchboost: Regularization for Stabilizing Training With Resistance to Underfitting & Overfitting is on arXiv, and the project is available on GitHub.


Meet the author Maciej A. Czyzewski from Poznan University of Technology.


Share Your Research With Synced

Share My Research is Synced’s new column that welcomes scholars to share their own research breakthroughs with over 1.5M global AI enthusiasts. Beyond technological advances, Share My Research also calls for interesting stories behind the research and exciting research ideas. Share your research with us by clicking here.
