The Atari57 suite of classic video games is a popular benchmark used in the reinforcement learning (RL) community to test the general competency of RL algorithms. As Synced reported in 2020, DeepMind researchers created Agent57, the first deep RL agent to achieve above-human performance on all 57 games. While Agent57's performance was heralded as a breakthrough in RL, it came at the cost of very poor data efficiency, requiring nearly 80 billion frames of experience.
In the new paper Human-level Atari 200x Faster, a DeepMind research team applies a set of diverse strategies to Agent57. Their resulting MEME (Efficient Memory-based Exploration) agent surpasses the human baseline on all 57 Atari games in just 390 million frames, two orders of magnitude faster than Agent57.
The team summarizes their work’s main contributions as follows:
- Building off Agent57, we carefully examine bottlenecks that slow down learning and address instabilities that arise when these bottlenecks are removed.
- We propose a novel agent that we call MEME (Efficient Memory-based Exploration agent), which introduces solutions that make three otherwise destabilizing approaches viable: (1) training the value functions of the whole family of policies from Agent57 in parallel, on all policies' transitions (instead of just the behaviour policy's transitions); (2) bootstrapping from the online network; and (3) using high replay ratios.
- We explore several recent advances in deep learning and determine which of them are beneficial for non-stationary problems like the ones considered in this work.
- We examine approaches to robustify performance by introducing a policy distillation mechanism that learns a policy head based on the actions obtained from the value network without being sensitive to value magnitudes.
The DeepMind researchers’ goal was the development of an agent as general as Agent57 and capable of reaching human-level performance across the entire Atari57 game suite but with much higher sample efficiency. The paper details the novel techniques used to achieve this:
- An approximate trust region method for stable bootstrapping from the online network to enable faster propagation of learning signals for rare events
- A normalization scheme for the loss and priorities to improve the robustness of value function learning and stabilize learning under differing value scales
- An improved neural network architecture that leverages NFNets to achieve strong performance without normalization layers
- A policy distillation method to smooth out the instantaneous greedy policy over time and enable more robust updates under a rapidly-changing policy
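The trust-region idea in the first bullet can be sketched in a few lines. This is a minimal illustration, not the paper's exact rule: the function name `trust_region_bootstrap` and the fixed `tolerance` are assumptions for clarity (the paper derives its acceptance region from running statistics rather than a constant).

```python
import numpy as np

def trust_region_bootstrap(q_online_next, q_target_next, tolerance):
    """Choose the bootstrap value for each transition.

    Use the online network's value (faster propagation of learning
    signals) only when it stays within `tolerance` of the target
    network's value; otherwise fall back to the stable target estimate.
    """
    q_online_next = np.asarray(q_online_next, dtype=float)
    q_target_next = np.asarray(q_target_next, dtype=float)
    inside = np.abs(q_online_next - q_target_next) <= tolerance
    return np.where(inside, q_online_next, q_target_next)

# Example: the second online estimate drifted too far from the target,
# so the target-network value is kept for that transition.
values = trust_region_bootstrap([1.1, 5.0], [1.0, 1.0], tolerance=0.5)
```

Gating the online bootstrap this way keeps the speed benefit of fresh values while the target network acts as a safety net against divergence.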
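The normalization bullet can be illustrated with a toy running-scale scheme: TD errors are divided by an estimate of their typical magnitude before being used for the loss and replay priorities, so games with very different value scales produce comparable updates. The EMA-of-RMS statistic and the name `normalized_td_errors` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def normalized_td_errors(td_errors, running_scale, decay=0.99, eps=1e-3):
    """Normalize TD errors by a running estimate of their scale.

    The scale is an exponential moving average of the batch RMS of TD
    errors (an illustrative choice). Dividing by it keeps loss
    magnitudes and replay priorities stable across differing value
    scales; `eps` guards against division by a near-zero scale.
    """
    td_errors = np.asarray(td_errors, dtype=float)
    batch_scale = np.sqrt(np.mean(td_errors ** 2))
    running_scale = decay * running_scale + (1.0 - decay) * batch_scale
    normalized = td_errors / max(running_scale, eps)
    return normalized, running_scale
```

In a real training loop the updated `running_scale` would be carried across batches alongside the optimizer state.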
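The policy distillation bullet amounts to training a policy head with a cross-entropy loss toward the value network's greedy action; since only the argmax of the Q-values is used, the loss is insensitive to their magnitudes. The sketch below is a minimal single-step version under that assumption; `distillation_loss` is a hypothetical name, and the paper additionally smooths the target policy over time.

```python
import numpy as np

def distillation_loss(policy_logits, q_values):
    """Cross-entropy of the policy head against the greedy action
    of the value network. Only argmax(q_values) is used, so scaling
    the Q-values does not change the loss."""
    greedy = np.argmax(q_values, axis=-1)
    # Log-softmax computed with a max shift for numerical stability.
    shifted = policy_logits - policy_logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(greedy)), greedy].mean()

# Scaling the Q-values by 100 leaves the loss unchanged.
logits = np.array([[2.0, 0.0, -1.0]])
q = np.array([[10.0, 3.0, 1.0]])
assert np.isclose(distillation_loss(logits, q), distillation_loss(logits, 100 * q))
```

Acting from the distilled policy head rather than the raw greedy policy gives smoother behaviour while the value function is still changing rapidly.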
In their empirical study, the researchers applied MEME to all 57 Atari games, where it surpassed the human baseline on every game in just 390M frames, 200 times faster than Agent57.
The team notes that despite MEME's success, there remains room for improvement with regard to its generality, and envisions applying MEME to additional challenges such as more complex observation spaces (e.g. 3D navigation, multi-modal inputs), complex action spaces, and longer-term credit assignment.
The paper Human-level Atari 200x Faster is on arXiv.
Author: Hecate He | Editor: Michael Sarazen