The Atari57 suite of classic video games is a popular benchmark used in the reinforcement learning (RL) community to test the general competency of RL algorithms. As Synced reported in 2020, DeepMind researchers created Agent57, the first deep RL agent to achieve above-human performance on all 57 games. While Agent57's performance was heralded as a breakthrough moment in RL, it came at the cost of very poor data efficiency, requiring nearly 80 billion frames of experience.
In the new paper Human-level Atari 200x Faster, a DeepMind research team applies a diverse set of strategies to Agent57. The resulting MEME (Efficient Memory-based Exploration) agent surpasses the human baseline on all 57 Atari games in just 390 million frames, two orders of magnitude faster than Agent57.

The team summarizes their work’s main contributions as follows:
- Building off Agent57, we carefully examine bottlenecks that slow down learning and address instabilities that arise when these bottlenecks are removed.
- We propose a novel agent that we call MEME (Efficient Memory-based Exploration agent), which introduces solutions that enable it to take advantage of three approaches that would otherwise lead to instabilities: training the value functions of the whole family of policies from Agent57 in parallel, on all policies' transitions (instead of just the behaviour policy's transitions); bootstrapping from the online network; and using high replay ratios (a simplified sketch of the parallel family training appears after this list).
- We explore several recent advances in deep learning and determine which of them are beneficial for non-stationary problems like the ones considered in this work.
- We examine approaches to robustify performance by introducing a policy distillation mechanism that learns a policy head based on the actions obtained from the value network without being sensitive to value magnitudes.
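To make the first point above more concrete, here is a minimal sketch of training the value functions of a whole family of policies in parallel on the same shared batch of transitions. All class names, shapes, and the one-step TD update are illustrative assumptions for this post, not the paper's implementation, which uses recurrent networks and multi-step return estimators.

```python
import torch
import torch.nn as nn


class FamilyQNetwork(nn.Module):
    """Shared torso with one Q-value head per member of the policy family
    (hypothetical sketch, e.g. one head per exploration/discount setting)."""

    def __init__(self, obs_dim: int, num_actions: int, num_policies: int, hidden: int = 256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_actions) for _ in range(num_policies)]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        z = self.torso(obs)
        # Stack per-policy Q-values: shape [num_policies, batch, num_actions].
        return torch.stack([head(z) for head in self.heads], dim=0)


def family_td_loss(q_net, target_net, obs, actions, rewards, next_obs, discounts):
    """One-step TD loss applied to every head on the same shared batch,
    regardless of which member of the family generated the transitions."""
    q_all = q_net(obs)                                        # [P, B, A]
    with torch.no_grad():
        next_q = target_net(next_obs).max(dim=-1).values      # [P, B]
        targets = rewards + discounts * next_q                # discounts: [P, 1]
    num_policies = q_all.shape[0]
    chosen = q_all.gather(
        -1, actions.unsqueeze(0).expand(num_policies, -1).unsqueeze(-1)
    ).squeeze(-1)                                             # [P, B]
    return ((chosen - targets) ** 2).mean()
```

Because every head receives gradient from every transition, a single batch updates the whole family at once, which is what allows the value functions to be trained in parallel rather than only from their own behaviour data.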

The DeepMind researchers’ goal was to develop an agent as general as Agent57, capable of reaching human-level performance across the entire Atari57 game suite, but with much higher sample efficiency. The paper details the novel techniques used to achieve this:
- An approximate trust region method for stable bootstrapping from the online network to enable faster propagation of learning signals for rare events
- A normalization scheme for the loss and replay priorities that makes value-function learning more robust and stable under the widely differing value scales across games (illustrated after this list)
- An improved neural network architecture that leverages NFNet techniques to remove the need for normalization layers
- A policy distillation method to smooth out the instantaneous greedy policy over time and enable more robust updates under a rapidly changing policy (also illustrated after this list)
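The loss and priority normalization can be pictured with the hedged sketch below: TD errors are divided by a running estimate of their scale before they feed both the value loss and the replay priorities. The statistic tracked and the way it is maintained are assumptions made for illustration; the paper's actual scheme may differ.

```python
import torch


class TDErrorNormalizer:
    """Illustrative running-scale normalizer for TD errors (hypothetical helper,
    not the paper's exact scheme). Dividing by a tracked scale keeps the loss
    and the replay priorities comparable across games whose returns differ by
    orders of magnitude."""

    def __init__(self, decay: float = 0.99, eps: float = 1e-3):
        self.decay = decay
        self.eps = eps
        self.scale = None  # running estimate of the TD-error magnitude

    def __call__(self, td_errors: torch.Tensor) -> torch.Tensor:
        batch_scale = td_errors.detach().abs().mean()
        if self.scale is None:
            self.scale = batch_scale
        else:
            self.scale = self.decay * self.scale + (1.0 - self.decay) * batch_scale
        return td_errors / (self.scale + self.eps)


# Usage: the same normalized errors drive both the loss and the priorities.
# normalizer = TDErrorNormalizer()
# normalized = normalizer(q_pred - q_target)
# loss = 0.5 * (normalized ** 2).mean()
# priorities = normalized.abs().detach()
```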
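The policy distillation idea in the last bullet can likewise be illustrated with a short sketch: a separate policy head is trained with a cross-entropy loss toward the greedy action of the value network, so the update depends only on which action is greedy, not on the magnitude of the Q-values. This is a simplified assumption-based illustration; the paper's exact distillation loss and any trust-region masking are not reproduced here.

```python
import torch
import torch.nn.functional as F


def policy_distillation_loss(policy_logits: torch.Tensor,
                             q_values: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the policy head and the value network's greedy action.

    Only the argmax of q_values is used, so the gradient into the policy head
    is insensitive to the scale of the value estimates (illustrative sketch)."""
    greedy_actions = q_values.detach().argmax(dim=-1)       # [batch]
    return F.cross_entropy(policy_logits, greedy_actions)   # scalar
```

Acting from this slowly updated policy head, rather than from the instantaneous argmax of the value network, is what smooths the greedy policy over time and keeps updates stable while the Q-values are still changing quickly.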

In their empirical study, the researchers evaluated MEME on all 57 Atari games, where it handily surpassed the human baseline in just 390 million frames, 200 times faster than Agent57.
The team notes that despite MEME’s success, there remains room for improvement with regard to its generality, and envisions applying MEME to additional challenges such as more complex observation spaces (e.g. 3D navigation, multi-modal inputs), complex action spaces, and longer-term credit assignment.
The paper Human-level Atari 200x Faster is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
