Tag: Reinforcement Learning

AI Machine Learning & Data Science Research

DeepMind’s MEME Agent Achieves Human-level Atari Game Performance 200x Faster Than Agent57

In the new paper Human-level Atari 200x Faster, a DeepMind research team applies a set of diverse strategies to Agent57, with their resulting MEME (Efficient Memory-based Exploration) agent surpassing the human baseline on all 57 Atari games in just 390 million frames — two orders of magnitude faster than Agent57.

DeepMind’s ‘Expert-Aware’ Data Augmentation Technique Enables Data-Efficient Learning from Parametric Experts

The new DeepMind paper Data Augmentation for Efficient Learning from Parametric Experts proposes Augmented Policy Cloning (APC), a simple yet effective data-augmentation approach designed to support data-efficient learning from parametric experts. The method significantly improves data efficiency across various control and reinforcement learning settings.

DeepMind’s Model-Based Offline Options Framework Supports Automatic Skill & Behaviour Discovery, Boosts Transfer Capabilities

In the new paper MO2: Model-Based Offline Options, a DeepMind research team introduces Model-Based Offline Options (MO2), an offline hindsight bottleneck options framework that supports sample-efficient option discovery over continuous state-action spaces for efficient skill transfer to new tasks.

DeepMind & UCL’s Stochastic MuZero Achieves SOTA Results in Complex Stochastic Environments

In the new paper Planning in Stochastic Environments with a Learned Model, a research team from DeepMind and University College London extends the deterministic MuZero model to Stochastic MuZero for stochastic model learning, achieving performance comparable or superior to state-of-the-art methods in complex single- and multi-agent environments.
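The core extension can be illustrated with a tiny sketch (illustrative only, not DeepMind's implementation; the function name is hypothetical): Stochastic MuZero adds chance nodes to the learned model, so the value of an "afterstate" (the state after the agent acts but before the environment's stochastic response) is an expectation over possible chance outcomes.

```python
import numpy as np

def afterstate_value(outcome_probs, outcome_values):
    """Value of an afterstate: the expectation over chance outcomes.
    Stochastic MuZero inserts such chance nodes into MuZero's otherwise
    deterministic learned model, letting planning account for randomness."""
    return float(np.dot(outcome_probs, outcome_values))

# A chance node with two equally likely outcomes worth 1.0 and 3.0.
v = afterstate_value([0.5, 0.5], [1.0, 3.0])  # 2.0
```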

Learning Without Simulations? UC Berkeley’s DayDreamer Establishes a Strong Baseline for Real-World Robotic Training

In the new paper DayDreamer: World Models for Physical Robot Learning, researchers from the University of California, Berkeley leverage recent advances in the Dreamer world model to enable online reinforcement learning for robot training without simulators or demonstrations, establishing a strong baseline for efficient real-world robotic learning.

DeepMind Boosts RL Agents’ Retrieval Capability to Tens of Millions of Pieces of Information

In the new paper Large-Scale Retrieval for Reinforcement Learning, a DeepMind research team dramatically expands the information accessible to reinforcement learning (RL) agents, enabling them to attend to tens of millions of information pieces, incorporate new information without retraining, and learn decision making in an end-to-end manner.
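As a rough sketch of the retrieval idea (a toy stand-in, not DeepMind's architecture; the function name and the dot-product similarity are assumptions), the agent queries a large external datastore of (key, value) pairs, attends only to the k nearest entries, and blends their values, so the datastore can grow without retraining:

```python
import numpy as np

def retrieve(query, keys, values, k=2):
    """Nearest-neighbour retrieval with softmax attention over the top-k
    matches. Only the k closest datastore entries are attended to, so the
    store can hold millions of entries while each lookup stays cheap."""
    sims = keys @ query                      # dot-product similarity scores
    topk = np.argsort(sims)[-k:]             # indices of the k best matches
    w = np.exp(sims[topk] - sims[topk].max())
    w /= w.sum()                             # softmax attention weights
    return w @ values[topk]                  # weighted blend of retrieved values

keys = np.eye(3)                             # toy datastore of 3 unit keys
values = np.array([[1., 0.], [0., 1.], [5., 5.]])
query = np.array([10., 0., 0.])              # strongly matches entry 0
blended = retrieve(query, keys, values, k=2)
```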

NVIDIA & UW Introduce Factory: A Set of Physics Simulation Methods and Learning Tools for Contact-Rich Robotic Assembly

In the new paper Factory: Fast Contact for Robotic Assembly, a research team from NVIDIA and the University of Washington introduces Factory, a set of physics simulation methods and robot learning tools for simulating contact-rich interactions in assembly with high accuracy, efficiency, and robustness.

Huawei Rethinks Logic Synthesis, Proposing a Practical RL-based Approach That Achieves High Efficiency

In the new paper Rethinking Reinforcement Learning Based Logic Synthesis, a research team from Huawei Noah’s Ark Lab develops a novel reinforcement learning-based logic synthesis method to automatically recognize critical operators and produce common operator sequences that are generalizable to unseen circuits.

OpenAI’s AutoDIME: Automating Multi-Agent Environment Design for RL Agents

In the new paper AutoDIME: Automatic Design of Interesting Multi-Agent Environments, an OpenAI research team explores automatic environment design for multi-agent environments using an RL-trained teacher that samples environments to maximize student learning. The work demonstrates that intrinsic teacher rewards are a promising approach for automating both single and multi-agent environment design.
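One of the intrinsic teacher rewards the paper studies, value disagreement, can be sketched in a few lines (a simplified illustration; the function name is hypothetical): the teacher scores a candidate environment by how much an ensemble of student value estimates disagree on it, steering sampling toward environments the student has not yet mastered.

```python
import numpy as np

def value_disagreement(ensemble_value_preds):
    """Intrinsic teacher reward for a candidate environment: the variance of
    an ensemble of student value estimates. High variance means the student's
    value function is uncertain there, marking the environment 'interesting'."""
    return float(np.var(ensemble_value_preds))

# The teacher prefers the environment the student ensemble disagrees on most.
scores = [value_disagreement(v) for v in ([0.9, 0.9, 0.9], [0.1, 0.9, 0.5])]
```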

UC Berkeley’s Sergey Levine Says Combining Self-Supervised and Offline RL Could Enable Algorithms That Understand the World Through Actions

In the new paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences, argues that reinforcement learning can provide a general, principled, and powerful framework for utilizing unlabelled data, enabling machine learning systems that leverage large datasets to understand the real world.

DeepMind & IDSIA Introduce Symmetries to Black-Box MetaRL to Improve Its Generalization Ability

In the paper Introducing Symmetries to Black Box Meta Reinforcement Learning, a research team from DeepMind and The Swiss AI Lab IDSIA explores the role of symmetries in meta generalization and shows that introducing more symmetries to black-box meta-learners can improve their ability to generalize to unseen action and observation spaces, tasks, and environments.

CMU, Google & UC Berkeley Propose Robust Predictable Control Policies for RL Agents

A research team from Carnegie Mellon University, Google Brain and UC Berkeley proposes robust predictable control (RPC), a method for learning reinforcement learning policies that use fewer bits of information. This simple and theoretically justified algorithm achieves much tighter compression, is more robust, and generalizes better than prior methods, achieving up to 5× higher rewards than a standard information bottleneck.
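The bit-counting idea behind RPC can be sketched as follows (a simplified illustration of an information-bottleneck penalty, not the authors' code; the names and the diagonal-Gaussian assumption are mine): the policy pays a KL "information cost" whenever its latent representation deviates from what a learned prior predicts, and the objective trades environment return against that cost.

```python
import numpy as np

def gaussian_kl(mu_q, sig_q, mu_p, sig_p):
    """KL(q || p) for diagonal Gaussians, in nats: the information the
    policy's encoder q transmits beyond what the learned prior p predicts."""
    return float(np.sum(np.log(sig_p / sig_q)
                        + (sig_q**2 + (mu_q - mu_p)**2) / (2 * sig_p**2) - 0.5))

def rpc_objective(rewards, kl_costs, lam):
    """Compressed-control trade-off: total return minus lam * information used."""
    return float(np.sum(rewards) - lam * np.sum(kl_costs))

mu, sig = np.zeros(2), np.ones(2)
free = gaussian_kl(mu, sig, mu, sig)   # identical q and p transmit zero bits
```

Raising `lam` forces the policy toward representations the prior can predict, which is what yields the compression and robustness the paper reports.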

Pieter Abbeel Team’s Decision Transformer Abstracts RL as Sequence Modelling

A research team from UC Berkeley, Facebook AI Research and Google Brain abstracts reinforcement learning (RL) as a sequence modelling problem. Their proposed Decision Transformer simply outputs optimal actions by leveraging a causally masked transformer, yet matches or exceeds state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
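The reformulation is easiest to see in data-layout terms. A minimal sketch (illustrative, not the authors' code; helper names are mine) of how a trajectory becomes a token sequence for a causally masked transformer: each timestep contributes a (return-to-go, state, action) triple, and the causal mask ensures every token attends only to the past.

```python
import numpy as np

def returns_to_go(rewards):
    """Return-to-go at each timestep: the sum of rewards from t onward,
    the conditioning signal Decision Transformer trains on."""
    return np.cumsum(rewards[::-1])[::-1]

def interleave_tokens(rtg, states, actions):
    """Interleave (return-to-go, state, action) triples into one flat
    token sequence, the input layout fed to the transformer."""
    tokens = []
    for r, s, a in zip(rtg, states, actions):
        tokens.extend([("R", r), ("s", s), ("a", a)])
    return tokens

def causal_mask(n):
    """Lower-triangular mask: token i may only attend to tokens 0..i."""
    return np.tril(np.ones((n, n), dtype=bool))

rewards = np.array([0.0, 0.0, 1.0])
rtg = returns_to_go(rewards)          # [1.0, 1.0, 1.0]
tokens = interleave_tokens(rtg, states=[0, 1, 2], actions=[1, 0, 1])
mask = causal_mask(len(tokens))
```

At test time the same layout is used generatively: condition on a desired return-to-go and the states seen so far, and read out the action token.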

What Matters in Adversarial Imitation Learning? Google Brain Study Reveals Valuable Insights

A research team from Google Brain conducts a comprehensive empirical study on more than fifty choices in a generic adversarial imitation learning (AIL) framework, exploring their impact on large-scale (>500k trained agents) continuous-control tasks to provide practical insights and recommendations for designing novel and effective AIL algorithms.

Yoshua Bengio Team’s Recurrent Independent Mechanisms Endow RL Agents With Out-of-Distribution Adaptation and Generalization Abilities

A research team from the University of Montreal and Max Planck Institute for Intelligent Systems constructs a reinforcement learning agent whose knowledge and reward function can be reused across tasks, along with an attention mechanism that dynamically selects unchangeable knowledge pieces to enable out-of-distribution adaptation and generalization.