
DeepMind Introduces ‘Acme’ Research Framework for Distributed RL

DeepMind researchers introduce Acme, a framework that enables simple RL agent implementations to be run at different scales of execution, addressing the reproducibility and prototyping problems created by increasingly large and complex RL systems.

In recent years, reinforcement learning (RL) has successfully trained agents to defeat human professionals in complex games, offered insights for solving drug design challenges, and much more. These exciting advances, however, often come with dramatic growth in model scale and complexity, which has made it difficult for researchers to reproduce existing RL algorithms or rapidly prototype new ideas.

In the new paper Acme: A Research Framework for Distributed Reinforcement Learning, a team of DeepMind researchers introduces a framework that aims to solve this problem by enabling simple RL agent implementations to be run at different scales of execution.


RL enables autonomous agents to learn how to interact with an unknown environment by relying on reward signals, which assign positive or negative feedback to the agent's actions. Through its exploration of the environment, an agent gathers useful experiences from which it can learn to adjust and improve its performance. In online RL, gathering environmental information and learning happen simultaneously, and an enormous amount of interaction between the agent and the environment is required. In simulated environments and games, researchers obtain this massive experience in a distributed manner.
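
To make the online setting concrete, here is a minimal sketch of the interaction loop the passage describes, using a toy corridor environment and a random agent. All of the names below are illustrative assumptions for this article and have nothing to do with Acme's actual API.

```python
import random

# Minimal illustration of online RL: acting and learning are interleaved,
# and every piece of experience comes from live interaction with the environment.

class ToyCorridor:
    """A 1-D corridor; the agent starts at cell 0 and is rewarded on reaching cell 5."""

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.position += action
        reward = 1.0 if self.position == 5 else 0.0
        done = self.position == 5
        return self.position, reward, done


class RandomAgent:
    def select_action(self, observation):
        return random.choice([-1, +1])

    def observe(self, observation, action, reward, next_observation):
        pass  # a learning agent would update its policy from this transition


env, agent = ToyCorridor(), RandomAgent()
for episode in range(3):
    obs, done, steps = env.reset(), False, 0
    while not done and steps < 1000:  # cap steps: a random walk can wander for a while
        action = agent.select_action(obs)
        next_obs, reward, done = env.step(action)
        agent.observe(obs, action, reward, next_obs)  # learning happens during interaction
        obs, steps = next_obs, steps + 1
```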

Offline RL, meanwhile, does not gather new experience online: the policy, typically still represented as a deep neural network, is instead learned from a fixed dataset of previously collected experiences. In both settings, however, the widespread use of increasingly large-scale distributed systems in RL agent training is noteworthy.
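
The offline setting can be sketched in the same spirit: below, a policy is improved purely from a fixed set of logged transitions, with no further environment interaction. The tabular Q-learning update and the toy dataset are assumptions chosen for illustration, not Acme's implementation.

```python
from collections import defaultdict

# A fixed dataset of (state, action, reward, next_state) transitions,
# logged earlier by some other policy; no new experience is ever gathered.
dataset = [
    (0, +1, 0.0, 1), (1, +1, 0.0, 2), (2, +1, 0.0, 3),
    (3, +1, 1.0, 4), (0, -1, 0.0, -1), (-1, +1, 0.0, 0),
]

q = defaultdict(float)   # tabular action-value estimates
alpha, gamma = 0.1, 0.9
for _ in range(200):     # learning consists solely of repeated passes over the data
    for s, a, r, s_next in dataset:
        best_next = max(q[(s_next, +1)], q[(s_next, -1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

# Greedy policy extracted from the learned values
policy = {s: max((+1, -1), key=lambda a: q[(s, a)]) for s, *_ in dataset}
print(policy)
```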

The researchers note that moving from a simple, single-process prototype of an algorithm to a full large-scale distributed system typically requires re-implementing the agent, which harms reproducibility. The team explains they designed Acme to enable agents to run in both single-process and highly distributed regimes by providing tools and components for constructing agents at various levels of abstraction: from the lowest level (e.g., networks, losses, policies), through workers (actors, learners, replay buffers), up to entire agents complete with the experimental apparatus necessary for robust measurement and evaluation, such as training loops, logging, and checkpointing.
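
The layered design described above can be pictured roughly as in the structural sketch below. Class and method names are assumptions made for this article and do not reproduce Acme's actual API.

```python
class ReplayBuffer:
    """Stores transitions written by the actor and sampled by the learner."""

    def __init__(self):
        self.items = []

    def add(self, transition):
        self.items.append(transition)

    def sample(self, n):
        return self.items[-n:]


class Actor:
    """Interacts with the environment and writes experience into replay."""

    def __init__(self, replay):
        self.replay = replay

    def select_action(self, observation):
        ...  # query the policy network for an action

    def observe(self, transition):
        self.replay.add(transition)


class Learner:
    """Consumes batches of experience and updates the network parameters."""

    def __init__(self, replay):
        self.replay = replay

    def step(self):
        batch = self.replay.sample(32)
        ...  # compute the loss and apply a gradient update to the networks


class Agent:
    """Couples an actor and a learner behind a single interface; in a distributed
    setup the same pieces can instead run as separate workers."""

    def __init__(self):
        replay = ReplayBuffer()
        self.actor = Actor(replay)
        self.learner = Learner(replay)
```

In the single-process regime these components live together in one program and alternate between acting and learning steps, while in a distributed regime the actors, learner, and replay storage become separate workers that exchange data over the network.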


The team describes Acme's core as a classical RL interface that connects actors with their environments: an actor makes observations, selects actions that are fed back into the environment, and uses the resulting feedback to update its internal state. This internal division between acting and learning from data also allows researchers to reuse the acting portion across many different agents.
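
As a rough illustration of that division, the sketch below shows an acting component that is agnostic to how learning happens; the names are again assumptions rather than Acme's actual classes.

```python
import random

# The actor only needs a policy function and somewhere to send experience,
# so the same acting code can serve agents whose learning components differ entirely.

class GenericActor:
    def __init__(self, policy_fn, experience_sink):
        self.policy_fn = policy_fn              # maps an observation to an action
        self.experience_sink = experience_sink  # e.g. a replay buffer or a dataset writer

    def select_action(self, observation):
        return self.policy_fn(observation)

    def observe(self, observation, action, reward, next_observation):
        self.experience_sink.append((observation, action, reward, next_observation))


# Two different "agents" reusing the same acting code with different policies:
greedy_actor = GenericActor(lambda obs: +1, experience_sink=[])
random_actor = GenericActor(lambda obs: random.choice([-1, +1]), experience_sink=[])
```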


Acme can enable reproducibility of methods and results, simplify the design of new algorithms, and enhance the readability of RL agents. DeepMind says it released Acme to support scalable and fast iteration of research ideas in RL, and hopes the research community will use the tool to explore RL agents at various levels of complexity and leverage it as a reference implementation of existing RL algorithms and robust baselines.

The paper Acme: A Research Framework for Distributed Reinforcement Learning is on arXiv, and Acme itself can be found on the project GitHub.


Journalist: Fangyu Cai | Editor: Michael Sarazen
