
UC Berkeley’s FastRLAP Learns Aggressive and Effective High-Speed Driving Strategies With <20 Minutes of Real-World Training

In the new paper FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing, a UC Berkeley research team proposes FastRLAP (Fast Reinforcement Learning via Autonomous Practicing), a system that autonomously practices in the real world and learns aggressive maneuvers to enable effective high-speed driving.

High-speed driving is thrilling, but it presents serious challenges for humans and vision-based AI navigation models alike. The faster a vehicle travels, the less time a controller has to react for collision-free navigation, and it must handle both the vehicle’s dynamics and perceived obstacles under these conditions. Many prior approaches to this task rely on imitation learning, which requires expert human demonstrations. Could effective high-speed driving instead be achieved by letting the vehicle adapt its navigational strategies autonomously?

A UC Berkeley research team explores this possibility in the new paper FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing, proposing Fast Reinforcement Learning via Autonomous Practicing. The FastRLAP system leverages sample-efficient end-to-end reinforcement learning and autonomous practicing in the real world to efficiently learn the “aggressive maneuvers” that enable high-speed driving.

The proposed FastRLAP comprises three main components: (1) a finite state machine (FSM) that selects the next checkpoint for the online RL policy and automatically recovers from collisions, enabling autonomous real-world practicing; (2) a pretrained representation of visual observations that captures driving-specific features such as free space and obstacles; and (3) a sample-efficient RL algorithm for online learning.
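As a rough illustration of the autonomous practicing component, the minimal Python sketch below shows how an FSM might cycle a vehicle through course checkpoints and trigger a recovery behavior after a collision. All names, signatures, and thresholds here are hypothetical placeholders, not the authors’ actual implementation.

```python
import numpy as np

class PracticingFSM:
    """Cycles through course checkpoints and triggers a recovery
    behavior after collisions, so an RL policy can practice unattended."""

    def __init__(self, checkpoints):
        self.checkpoints = checkpoints  # list of np.array (x, y) goals around the course
        self.idx = 0

    def current_goal(self):
        return self.checkpoints[self.idx]

    def step(self, position, collided, goal_radius=0.5):
        if collided:
            return "recover"  # e.g., back up and re-orient before resuming practice
        if np.linalg.norm(position - self.current_goal()) < goal_radius:
            # Goal reached: advance to the next checkpoint (wrapping per lap).
            self.idx = (self.idx + 1) % len(self.checkpoints)
            return "advance"
        return "drive"  # keep pursuing the current checkpoint

# Hypothetical usage: the FSM's output would gate the RL policy's rollout loop.
fsm = PracticingFSM([np.array([0.0, 0.0]), np.array([5.0, 3.0])])
mode = fsm.step(position=np.array([0.1, 0.2]), collided=False)
```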

The RL policy is trained in the real world to reach the FSM-indicated goals, improving over time as it learns aggressive driving maneuvers in challenging environments. To boost computational and sample efficiency, the researchers bootstrap the RL policy with a representation of navigation-specific visual features learned offline from prior data.
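The bootstrapping step can be pictured as initializing the policy’s image encoder from offline-pretrained weights before online RL begins. The PyTorch sketch below is a hedged illustration under assumed module names, dimensions, and file paths; the paper’s actual architecture and training details may differ.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder mapping camera images to feature vectors."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, img):
        return self.net(img)

encoder = Encoder()
# In practice the encoder would be initialized from weights pretrained offline
# on prior driving data (the path below is purely illustrative), e.g.:
# encoder.load_state_dict(torch.load("driving_encoder.pt"))

# Policy head on top of the pretrained features: outputs steering and throttle.
actor = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

# Online RL then optimizes the head (and optionally fine-tunes the encoder).
optimizer = torch.optim.Adam(actor.parameters(), lr=3e-4)
```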

In their empirical study, the team applied FastRLAP to a small RC car in various real-world environments, where it consistently recorded faster lap times and fewer collisions than ImageNet pretraining and offline RL baselines. Moreover, FastRLAP learned its effective high-speed driving strategies with under 20 minutes of real-world training.

The team believes FastRLAP’s effective image-based high-speed driving abilities could also help advance the use of RL-based systems for learning complex and highly performant navigation skills in other real-world applications.

The FastRLAP code, additional experimental results, and videos are available at sites.google.com/view/fastrlap. The paper FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


