Inspired by biological evolution, evolutionary computation run at scale on CPU clusters has proven a competitive approach for training neural networks. While the emergence in recent years of hardware accelerators such as GPUs and TPUs has advanced the state-of-the-art in deep learning (DL) training, hardware-accelerated tooling for evolutionary computation has remained relatively underexplored.
To narrow this gap, a Google Brain research team has introduced EvoJAX, a JAX-based, scalable, general-purpose, hardware-accelerated neuroevolution toolkit that enables neuroevolution algorithms to work with neural networks running in parallel across multiple TPUs or GPUs to achieve significant training speedups.
In their paper EvoJAX: Hardware-Accelerated Neuroevolution, the team details EvoJAX’s design and showcases extensible examples for a wide range of tasks to demonstrate how EvoJAX can shorten the experimental iteration cycle for researchers working with evolutionary computation.
EvoJAX was developed to improve neuroevolution training efficiency by implementing the entire pipeline in a modern ML framework that supports hardware acceleration. The team built EvoJAX on the JAX Python library due to its impressive auto-vectorization, device-parallelism and just-in-time compilation features and its broad hardware support.
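The JAX primitives the toolkit builds on can be seen in a minimal, self-contained sketch (plain JAX, not EvoJAX code): `vmap` auto-vectorizes a per-example function across a batch axis, and `jit` compiles the result with XLA for CPU/GPU/TPU execution.

```python
import jax
import jax.numpy as jnp

# A per-example affine transform; vmap lifts it to batches, jit compiles it.
def affine(params, x):
    w, b = params
    return jnp.dot(w, x) + b

# in_axes=(None, 0): share the parameters, map over the batch of inputs.
batched_affine = jax.jit(jax.vmap(affine, in_axes=(None, 0)))

w = jnp.ones((2, 3))
b = jnp.zeros(2)
xs = jnp.ones((8, 3))            # a batch of 8 inputs
ys = batched_affine((w, b), xs)  # shape (8, 2)
```

The same pattern, applied to whole populations of policy parameters, is what lets evolutionary training pipelines run end-to-end on accelerators.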
The EvoJAX workflow comprises three major components: the neuroevolution algorithm, the policy and the task. While these are also found in existing neuroevolution implementations, key differences make EvoJAX more efficient, relating to modern ML optimizers, a global policy, vectorized tasks and device parallelism.
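The loop these three components form can be outlined in a hypothetical sketch; the function names and the toy fitness below are illustrative, not the actual EvoJAX interfaces.

```python
import jax
import jax.numpy as jnp

def ask(key, mu, pop_size, sigma=0.1):
    # Algorithm: sample a population of candidate parameters around mu.
    return mu + sigma * jax.random.normal(key, (pop_size, mu.shape[0]))

def rollout(params, obs):
    # Policy + task: toy fitness measures how close the action is to 1.0.
    action = jnp.tanh(params @ obs)
    return -jnp.sum((action - 1.0) ** 2)

def tell(mu, population, scores, lr=0.5):
    # Algorithm: move mu toward the best-scoring candidate.
    return mu + lr * (population[jnp.argmax(scores)] - mu)

# Evaluate the whole population at once: vmap over candidates, jit-compile.
evaluate = jax.jit(jax.vmap(rollout, in_axes=(0, None)))

key = jax.random.PRNGKey(0)
mu = jnp.zeros(3)
obs = jnp.ones(3)
for step in range(20):
    key, sub = jax.random.split(key)
    population = ask(sub, mu, pop_size=32)
    mu = tell(mu, population, evaluate(population, obs))
```

Because parameters, observations and fitness scores are all plain arrays, the entire ask-evaluate-tell cycle stays on the accelerator.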
For optimization, EvoJAX leverages JAX-based libraries to achieve significant speedups and provide user-friendly tools and interfaces that enable more efficient developments and implementations. EvoJAX builds a global policy that treats both task observations and policy parameters as data for the computational graph, an approach that is consistent with DL frameworks and enables hardware acceleration. The team also groups tasks in a vectorized form to complement EvoJAX’s global policy design. By leveraging JAX’s device-parallelism support, EvoJAX is capable of scaling its training procedure almost linearly to the available hardware accelerators. In addition to these key features, EvoJAX also comes with a trainer and a simulation manager that help orchestrate and manage the training process.
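The device-parallelism idea can be sketched with JAX's `pmap` (again a hypothetical illustration, not EvoJAX's internals): the population is split evenly across the available accelerators and each shard is evaluated in parallel.

```python
import jax
import jax.numpy as jnp

def fitness(params):
    # Toy per-candidate fitness: negative squared norm of the parameters.
    return -jnp.sum(params ** 2, axis=-1)

n_dev = jax.device_count()
# pmap maps over the leading (device) axis, running one shard per device.
parallel_fitness = jax.pmap(fitness)

# Shape: (devices, per-device population, parameter dimension).
population = jnp.ones((n_dev, 8, 4))
scores = parallel_fitness(population)  # shape (n_dev, 8)
```

Because each device evaluates its shard independently, adding accelerators scales throughput close to linearly, which matches the scaling behavior the paper reports.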
The paper includes six examples that showcase EvoJAX's capabilities, efficiency and usage on supervised learning tasks (MNIST Classification, Seq2Seq Learning), control tasks (Robotic Control and Cart-Pole Swing Up) and novel tasks (WaterWorld and Concrete or Abstract Painting).
From their observations on these tasks, the team summarizes EvoJAX’s benefits as:
- On modest hardware accelerators, EvoJAX delivers training speedups of 10 to 20 times over the baseline, enabling quicker idea iteration.
- EvoJAX can train multiple agents in complex settings that would be difficult to design by hand, enriching the training environment.
- EvoJAX runs the entire pipeline on a unified hardware setup, sparing practitioners complex multi-machine arrangements.
The work demonstrates EvoJAX’s ability to find solutions to tasks within minutes on a single accelerator, compared to hours or days when using CPUs. The team designed EvoJAX to provide researchers with an infrastructure that enables fast idea iterations and can help them devise more effective neuroevolution algorithms, explore novel policy architectures, and experiment with new tasks. They plan to release additional neuroevolution algorithm implementations for EvoJAX and add more policies and tasks to encourage its adoption as a useful toolkit for researchers in this growing field.
EvoJAX is available on the project GitHub. The paper EvoJAX: Hardware-Accelerated Neuroevolution is on arXiv.
Author: Hecate He | Editor: Michael Sarazen