
Fast and Furious! DRL-Fuelled Agents Grab Pole Position in Gran Turismo Sport


Deep reinforcement learning (DRL) agents trained by researchers from the University of Zurich and ETH Zurich have achieved superhuman performance in the time trials of popular racing simulator game Gran Turismo Sport.

Whether on real roads or in simulations, autonomous driving at high speeds remains a challenging task as it requires fast and precise operations while pushing vehicles’ physical limits to the extreme. Although AI agents have shown promising results in recent years in race simulations, expert human drivers have still dominated competitions. Until now.

The Swiss researchers chose the popular 2017 Sony racing game Gran Turismo Sport (GTS), known for its detailed car and track physics simulations, as the platform to test their DRL agents. The goal was to build a neural network controller that could autonomously navigate a race car without prior knowledge of the car’s dynamics and have it complete a lap of the track as quickly as possible “without overshooting into the track’s walls.”

Unlike previous methods that rely on classical trajectory planning and control, the new approach leverages reinforcement learning to train a deep sensorimotor policy that directly maps observations to control commands. The researchers first define a reward function that formulates the racing problem; a neural network policy then maps input states to actions. The policy parameters are optimized by maximizing the reward function as the agents learn to drive autonomously on different tracks, using various cars, at high speed.
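The core loop described above — a parameterized policy maps observations to actions, and its parameters are nudged in the direction that increases expected reward — can be sketched in a few lines. The toy task below (learning a single “steering” parameter that should converge toward a hidden ideal value) and all names in it are illustrative assumptions for exposition; the paper itself trains a deep neural network inside the full GTS simulator, not this simplified REINFORCE-style update.

```python
import random

random.seed(0)

theta = 0.0    # policy parameter: mean "steering" command
sigma = 0.5    # fixed exploration noise (std of the Gaussian policy)
target = 1.0   # hidden ideal steering value (stand-in for the racing line)
lr = 0.02      # learning rate

def reward(action):
    # Higher reward the closer the action is to the ideal value.
    return -(action - target) ** 2

for episode in range(5000):
    # Sample an action from the Gaussian policy N(theta, sigma^2).
    action = theta + sigma * random.gauss(0.0, 1.0)
    r = reward(action)
    # REINFORCE gradient estimate for a Gaussian policy:
    # d/d_theta log N(a | theta, sigma) = (a - theta) / sigma**2
    grad = (action - theta) / sigma ** 2 * r
    # Gradient ascent on expected reward.
    theta += lr * grad
```

After training, `theta` drifts toward the high-reward region around `target` — the same principle, scaled up to deep networks and a rich simulator, underlies the agents in this work.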


In experiments, the proposed DRL agents beat the built-in Gran Turismo Sport NPCs (non-player characters) and bettered the fastest personal-best lap times of over 50,000 human drivers. The researchers attribute the success to their agents’ ability to self-learn track trajectories that were qualitatively similar to those chosen by the best human players while maintaining slightly higher speeds through curves.

Including training and evaluation, it took the team less than 73 hours to deliver the DRL agents. Although their research was limited to time trials conducted without other cars on the track, the team plans to use more data-efficient RL algorithms such as meta-RL to push their speedsters to additional challenges.

The paper Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning is available on arXiv.


Reporter: Fangyu Cai | Editor: Michael Sarazen




