Neural network training requires careful and time-consuming hyperparameter tuning. One promising way to automate this process is the recently proposed Population Based Training (PBT) approach. PBT's decision mechanisms, however, are not ideal: they tend to favour short-term improvements, which can result in poor long-term model performance.
To address this issue, a DeepMind research team has proposed Faster Improvement Rate PBT (FIRE PBT), a novel approach that outperforms standard PBT and matches the performance of networks trained with traditional manual hyperparameter tuning on the ImageNet benchmark.

In PBT, a population of workers simultaneously trains their respective neural networks, each with its own hyperparameters. At regular intervals during training, each worker compares its evaluation score (“fitness”) with the rest of the population. If a worker's fitness is lower than that of its peers, it undergoes an exploit-and-explore process: in the exploit step it discards its own state and copies the neural network weights and hyperparameters of a better-performing peer, and in the explore step it mutates the copied hyperparameters.
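To make the mechanism concrete, here is a minimal Python sketch of one exploit-and-explore step. It is illustrative only: the worker dictionary layout, the quartile cutoff, and the 0.8/1.2 perturbation factors are our own assumptions, not details from the paper.

```python
import copy
import random

def exploit_and_explore(workers, perturb_factors=(0.8, 1.2)):
    """One illustrative PBT step (not DeepMind's exact implementation).

    Bottom-quartile workers copy the state of a random top-quartile
    worker (exploit), then multiplicatively perturb the copied
    hyperparameters (explore)."""
    ranked = sorted(workers, key=lambda w: w["fitness"], reverse=True)
    cutoff = max(1, len(ranked) // 4)
    top, bottom = ranked[:cutoff], ranked[-cutoff:]
    for worker in bottom:
        donor = random.choice(top)
        # Exploit: discard own state, copy weights and hyperparameters.
        worker["weights"] = copy.deepcopy(donor["weights"])
        worker["hparams"] = dict(donor["hparams"])
        # Explore: randomly mutate each copied hyperparameter.
        for name in worker["hparams"]:
            worker["hparams"][name] *= random.choice(perturb_factors)

# Hypothetical usage: eight workers jointly tuning a learning rate.
workers = [{"weights": None,
            "hparams": {"lr": 10 ** random.uniform(-4, -1)},
            "fitness": random.random()}
           for _ in range(8)]
exploit_and_explore(workers)
```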

Unlike previous sequential hyperparameter optimization methods, PBT leverages parallel training to speed up the process. The hyperparameters are thus optimized concurrently with the training of the neural networks, which tends to yield better performance.

A drawback of PBT is that it is a greedy process favouring short-term rewards, which may lead to reduced performance later in training. To address this, FIRE PBT divides the workers into two main groups: population members and evaluators. The population members are further divided into disjoint sub-populations (p1, p2, … pn). Each sub-population internally runs regular PBT, but only p1 behaves greedily; every other sub-population acts as a parent of the sub-population below it and is judged differently. Evaluators copy the weights of a parent member and briefly train them with the hyperparameters used in the child sub-population, and the parent member's fitness is the resulting rate of improvement rather than its own current evaluation score. In this way, parent sub-populations are encouraged to produce neural network weights that improve quickly when trained with their child sub-population's hyperparameters.
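The key ingredient is this alternative fitness signal for parent sub-populations. Below is a minimal sketch of the idea, assuming user-supplied train_fn and fitness_fn callables; the function names and the simple per-step rate are our assumptions, and the paper's actual evaluation procedure is more involved.

```python
import copy

def improvement_rate(weights, child_hparams, train_fn, fitness_fn, steps=1000):
    """Sketch of a FIRE PBT-style fitness signal for parent sub-populations.

    An evaluator copies a parent member's weights, trains them briefly
    with the CHILD sub-population's hyperparameters, and reports how fast
    fitness improved. Parent members are ranked by this rate instead of
    by their own current fitness."""
    trial = copy.deepcopy(weights)                   # evaluator's private copy
    before = fitness_fn(trial)
    trial = train_fn(trial, child_hparams, steps)    # short evaluation run
    after = fitness_fn(trial)
    return (after - before) / steps                  # average improvement per step
```

Because a parent member's score depends on how its weights behave under the child's hyperparameters, short-term gains within a parent sub-population only pay off if they also help downstream training.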
In an empirical evaluation, the team compared FIRE PBT to PBT and random hyperparameter search (RS) on an image classification task and a reinforcement learning (RL) task.

In the image classification task, FIRE PBT significantly outperformed PBT and achieved results comparable to RS run with a hand-tuned schedule. The researchers also observed that FIRE PBT reached high accuracy quickly without compromising long-term performance.

In the reinforcement learning task, FIRE PBT showed faster learning and higher performance than both PBT and RS.
The study demonstrates that FIRE PBT can discover sensible hyperparameter schedules that match the performance of hand-tuned schedules and outperform static ones, validating the approach as an effective method that delivers faster improvement rates and better long-term performance.
The paper Faster Improvement Rate Population Based Training is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
