AI Technology

Reimagining the Dog: New DeepMind Models and Tutorial for Physics-Based RL Tasks

DeepMind researchers have released several new models and a tutorial for dm_control, their software stack for physics-based simulation and reinforcement learning (RL) environments built on MuJoCo physics.


The dm_control toolkit comprises Python libraries and task suites for RL agents in an articulated-body simulation. It has been around for a couple of years, initially designed by DeepMind researchers and engineers to facilitate their own continuous control and robotics needs. Dm_control has been applied extensively across the UK-based AI company’s projects, serving as a fundamental component of continuous control research.

The dm_control package is open-sourced on GitHub, where it has received nearly 2,000 stars. An introductory tutorial for the package is also available as a Colaboratory notebook.

A DeepMind dm_control blog post explains that the package also includes a MuJoCo wrapper that provides convenient bindings to functions and data structures; the PyMJCF and Composer libraries, which enable procedural model manipulation and task authoring; and the Control Suite.

The researchers say MuJoCo’s support for naming all model elements enables strings to index and slice into arrays, which makes for a much more robust, readable codebase. The PyMJCF library creates a Python object hierarchy with a 1:1 correspondence to a MuJoCo model. Composer can be considered the “game engine” framework: it defines a particular order of runtime function calls and abstracts the affordances of reward, termination and observation.

PyMJCF’s colourful and dynamic virtual “creatures”

The DeepMind Control Suite is a set of continuous control tasks with a standardized structure and rewards, intended to serve as performance benchmarks for RL agents. The researchers also added a delightful new dog environment, although the dog has now been temporarily removed due to a VFS bug. At least it doesn’t have fleas!

Also in the update is a set of configurable manipulation tasks featuring a robot arm and snap-together bricks, as well as several locomotion tasks covering scenarios such as soccer playing. The researchers have also made a locomotion framework available, which provides high-level abstractions and examples of locomotion tasks.

The paper Dm_control: Software and Tasks for Continuous Control is on arXiv.


Journalist: Yuan Yuan | Editor: Michael Sarazen
