Over the past five years, deep learning (DL) frameworks such as TensorFlow and PyTorch have been introduced to enable developers to quickly prototype new machine learning models. Most such frameworks, however, still struggle with implementing sequential decision methods, especially reinforcement learning (RL) algorithms.
To fill this gap, a Facebook AI research team has released SaLinA, a library that can be used for RL in multiple settings (model-based RL with differentiable environments, multi-agent RL, etc.) and greatly simplifies the implementation of complex sequential learning models.
The team explains that because traditional RL frameworks tend to define new and complex abstractions, they have a high adoption cost, low flexibility, and are difficult to use for researchers outside the RL domain. Moreover, they are usually intended for specific RL cases and cannot handle all RL settings.
The proposed SaLinA was designed to address these issues and make the implementation of sequential decision processes (including RL methods) as simple and natural as implementing neural network architectures.
SaLinA is built on two fundamental principles: 1) All modules/agents exchange information through a workspace, and 2) Everything is an agent.
Specifically, SaLinA defines a “salina.Workspace” object for tensor organization: a dictionary-like component that stores complex temporal traces, which may be generated by the model or loaded from a dataset. The first principle says that all agents read and write information through such a workspace; the second ensures that understanding the SaLinA workflow only requires knowing how to manipulate a workspace and how to define an agent.
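The two principles can be sketched in a few lines of plain Python. This is an illustrative toy, not SaLinA's actual API: the class names, the `get`/`set` signatures, and the `DoublingAgent` example are all hypothetical stand-ins for the real `salina.Workspace` and agent abstractions.

```python
# Illustrative sketch only -- hypothetical names, not the real SaLinA API.
# A workspace maps variable names to time-indexed traces, and every
# component is an "agent" that reads from and writes to the workspace.

class Workspace:
    """Dictionary-like store of temporal traces: name -> list over time steps."""
    def __init__(self):
        self.variables = {}

    def set(self, name, t, value):
        trace = self.variables.setdefault(name, [])
        if t == len(trace):
            trace.append(value)   # extend the trace by one step
        else:
            trace[t] = value      # overwrite an existing step

    def get(self, name, t):
        return self.variables[name][t]


class Agent:
    """An agent only knows how to read/write workspace variables at step t."""
    def __call__(self, workspace, t):
        raise NotImplementedError


class DoublingAgent(Agent):
    """Hypothetical agent: reads "obs" at step t, writes "action" at step t."""
    def __call__(self, workspace, t):
        obs = workspace.get("obs", t)
        workspace.set("action", t, obs * 2)


ws = Workspace()
agent = DoublingAgent()
for t in range(3):
    ws.set("obs", t, t + 1)   # pretend these come from an environment
    agent(ws, t)
print(ws.variables["action"])  # -> [2, 4, 6]
```

The key design point survives even in this toy: agents never call each other directly, so any agent can be swapped, composed, or replayed simply by pointing it at a workspace.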
The researchers say that based on these two principles, SaLinA can enable even developers without extensive RL experience to build any sequential learning model and execute it on multiple CPUs and GPUs.
The team summarizes the SaLinA library’s advantages as:
- Simplicity: Understanding the Agent and Workspace API is enough to understand SaLinA and to implement complex sequential decision models. There are no hidden mechanisms, and the two classes are very simple and familiar to any PyTorch user.
- Modularity: SaLinA allows one to build complex agents by combining simpler ones using pre-defined container agents.
- Flexibility: SaLinA provides additional tools to facilitate the implementation of complex models. It comes with wrappers that expose OpenAI Gym environments, PyTorch DataLoaders, and Brax environments as agents, allowing one to quickly develop a large variety of models. Moreover, because agents can be replayed on a workspace, there is no need for a dedicated replay buffer implementation, and batch RL becomes as simple as reading a workspace directly from a dataset.
- Scaling: SaLinA provides an NRemoteAgent wrapper that can execute any agent over multiple processes, speeding up the computation of any particular agent. Combined with the ability to place agents on CPU or GPU, this allows the library to scale to very large problems with only a few modifications to the code.
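The modularity claim above can be made concrete with a small sketch of container agents. Again, the names here (`SequenceAgent`, `TemporalRunner`, the toy environment and policy) are hypothetical illustrations of the composition pattern, not SaLinA's real container classes.

```python
# Illustrative sketch only -- hypothetical names, not SaLinA's actual classes.
# A container agent composes simpler agents; a temporal wrapper runs the
# composed agent over consecutive time steps of a shared workspace.

class SequenceAgent:
    """Runs its sub-agents one after another at the same time step."""
    def __init__(self, *agents):
        self.agents = agents

    def __call__(self, workspace, t):
        for agent in self.agents:
            agent(workspace, t)


class TemporalRunner:
    """Executes a (possibly composed) agent for t = 0 .. n_steps - 1."""
    def __init__(self, agent):
        self.agent = agent

    def __call__(self, workspace, n_steps):
        for t in range(n_steps):
            self.agent(workspace, t)


def env_agent(workspace, t):
    # Hypothetical "environment" agent: emits one observation per step.
    workspace.setdefault("obs", []).append(10 * (t + 1))


def policy_agent(workspace, t):
    # Hypothetical policy: action is half of the current observation.
    workspace.setdefault("action", []).append(workspace["obs"][t] // 2)


ws = {}  # a plain dict stands in for the workspace in this sketch
TemporalRunner(SequenceAgent(env_agent, policy_agent))(ws, 3)
print(ws["action"])  # -> [5, 10, 15]
```

Because the environment and the policy are both just agents over the same workspace, either one can be replaced (e.g. by a dataset reader for batch RL, or by a multi-process wrapper for scaling) without touching the other.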
Overall, the SaLinA library offers a novel way to implement sequential decision-making algorithms, enabling developers to prototype new algorithms and easily test new ideas without sacrificing training and testing speed. The researchers suggest that while SaLinA will benefit RL practitioners, it could also help computer vision researchers who want to add a sequential decision dimension to their methods and natural language processing researchers seeking a natural way to model dialogue.
The paper SaLinA: Sequential Learning of Agents is on arXiv.
Author: Hecate He | Editor: Michael Sarazen