
Autonomous Learning Library Simplifies Intelligent Agent Creation

The Autonomous Learning Library is a deep reinforcement learning (DRL) library for PyTorch that streamlines the building and evaluation of novel reinforcement learning agents.

Watching today’s human-beating intelligent agents play complex video games can be fun, but creating one is a different story. Building an effective intelligent agent requires tuning a mass of hyperparameters to shape the environment, define the rewards, and so on. A group of researchers from the University of Massachusetts Amherst has attempted to simplify the process with their new Autonomous Learning Library project.

The Autonomous Learning Library is a deep reinforcement learning (DRL) library for PyTorch that streamlines the building and evaluation of novel reinforcement learning agents. One of the project’s stated core philosophies is that reinforcement learning (RL) should be agent-based, meaning an agent simply accepts a state and a reward and returns an action.

[Image: Canonical agent-environment feedback loop]
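
In code, that philosophy translates into an agent interface with essentially a single method. The sketch below is an illustrative minimal version of such an interface, not the library’s actual API; the class and method names are assumptions made for the example.

```python
from abc import ABC, abstractmethod

import torch


class Agent(ABC):
    """Illustrative agent interface: an agent only maps (state, reward) -> action."""

    @abstractmethod
    def act(self, state, reward):
        """Observe the current state and the reward for the previous action,
        then choose the next action."""


class RandomAgent(Agent):
    """A trivial agent that ignores its inputs and samples actions uniformly."""

    def __init__(self, num_actions):
        self.num_actions = num_actions

    def act(self, state, reward):
        return torch.randint(self.num_actions, (1,)).item()
```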

The Autonomous Learning Library separates the control loop from the agent logic, simplifying both agent implementation and the control loop itself and increasing flexibility in how agents can be used. Because the control loop, rather than the agent, decides when the agent acts, the agent interface and its implementations can remain extremely concise.

[Image: Autonomous Learning Library agent interface]
[Image: DQN implementation in the Autonomous Learning Library]
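
The separation is easy to picture with a small sketch. The function below is an illustrative stand-in for the library’s own experiment runner, assuming an environment object with classic Gym-style reset() and step() methods:

```python
def run_episode(agent, env, max_steps=1000):
    """Drive an agent through one episode.

    The loop, not the agent, owns the environment. The env argument is assumed
    to expose reset() -> state and step(action) -> (state, reward, done, info).
    """
    state = env.reset()
    reward, total_return = 0.0, 0.0
    for _ in range(max_steps):
        action = agent.act(state, reward)  # the agent only sees (state, reward)
        state, reward, done, _ = env.step(action)
        total_return += reward
        if done:
            break
    return total_return
```

Because the agent never touches the environment directly, the same agent object can be reused for training, evaluation, or multi-environment runs simply by swapping in a different loop.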

The Autonomous Learning Library divides its RL agents across two modules: “all.agents” and “all.presets”. The “all.agents” module contains environment-agnostic implementations of common algorithms such as Rainbow, A2C and “vanilla” variants of standard methods, while “all.presets” provides versions of these agents with hyperparameters tuned for particular environments such as Atari games, classic control tasks, and so on.
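
The division can be thought of as algorithm versus configuration: a preset is essentially the same agent constructor with hyperparameters fixed for a family of environments. The snippet below is a hypothetical illustration of that pattern; the function names and hyperparameter values are invented for the example and are not the library’s actual preset code.

```python
from functools import partial


def dqn_agent(env, lr=1e-4, discount_factor=0.99, replay_buffer_size=100_000):
    """Stand-in for an environment-agnostic algorithm of the kind found in all.agents."""
    # A real implementation would construct networks, a replay buffer, a policy, etc.
    return {"env": env, "lr": lr, "discount_factor": discount_factor,
            "replay_buffer_size": replay_buffer_size}


# "Presets" in the spirit of all.presets: the same algorithm, with hyperparameters
# chosen for a particular family of environments (values here are illustrative only).
atari_dqn = partial(dqn_agent, lr=1e-4, replay_buffer_size=1_000_000)
classic_control_dqn = partial(dqn_agent, lr=1e-3, replay_buffer_size=10_000)
```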

[Image: Benchmark results for RL agents in Atari game environments]

The project also highlights its function approximation module as one of its central abstractions. By building agents that rely on the approximation abstraction rather than interfacing directly with PyTorch Module and Optimizer objects, users can add to or modify an agent’s functionality without altering its source code (the “Open-Closed Principle”). This lets the agent implementation focus on defining the RL algorithm itself.
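
A minimal sketch of this kind of wrapper is shown below. It is not the library’s own Approximation class, only an illustration of the idea: the agent calls the wrapper instead of the raw Module and Optimizer, so behaviors such as gradient clipping or learning-rate schedules can be layered on without editing the agent.

```python
import torch
from torch import nn, optim


class Approximation:
    """Illustrative function-approximation wrapper; the agent only calls __call__ and reinforce()."""

    def __init__(self, model, optimizer, clip_grad=0.5):
        self.model = model
        self.optimizer = optimizer
        self.clip_grad = clip_grad

    def __call__(self, *inputs):
        return self.model(*inputs)

    def reinforce(self, loss):
        """Backpropagate the loss and step the optimizer; extras such as gradient
        clipping live here rather than in the agent."""
        self.optimizer.zero_grad()
        loss.backward()
        nn.utils.clip_grad_norm_(self.model.parameters(), self.clip_grad)
        self.optimizer.step()


# Usage: the agent's update step reduces to q.reinforce(loss); adding target
# networks or schedulers means extending the wrapper, not rewriting the agent.
model = nn.Linear(4, 2)
q = Approximation(model, optim.Adam(model.parameters(), lr=1e-3))
loss = (q(torch.randn(1, 4)) - torch.zeros(1, 2)).pow(2).mean()
q.reinforce(loss)
```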

The researchers also built a sample implementation to show how the Autonomous Learning Library can be used to develop new agents not included in the original library. Although the results do not make the agent look particularly smart, they do demonstrate the practicality of the library.

[Image: Result of a sample demonstration using the Autonomous Learning Library to build new RL agents]

The Autonomous Learning Library project was shared by Christopher Nota, a PhD student in reinforcement learning at the University of Massachusetts Amherst. Additional information is available on the project GitHub.


Author: Victor Lu | Editor: Michael Sarazen
