
NVIDIA’s GameGAN Uses AI to Recreate Pac-Man and Other Game Environments

GameGAN is a generative model that learns to visually imitate video game environments by ingesting gameplay screen frames and keyboard actions during training.

Before AI agents are deployed in the real world, they must undergo extensive testing in challenging simulated environments. Writing the code to build good simulators, however, is usually highly time-consuming and requires skilled graphics experts. A more scalable way forward is to learn to simulate by simply observing the dynamics of the real world. While works such as Intel Labs and the University of Texas' Learning by Cheating approach the challenge by learning behaviours, such approaches require a great deal of supervision.

Aiming to train a game simulator that can model both the deterministic and stochastic nature of environments, researchers from NVIDIA, the University of Toronto, the Vector Institute and MIT have proposed a simulator that learns by simply watching an agent interact with its environment.


Focusing on games as a proxy for real environments, and particularly on the seminal Pac-Man, which turns 40 this year, the researchers propose GameGAN, a generative model that learns to visually imitate video game environments by ingesting gameplay screen frames and keyboard actions during training. GameGAN consists of three modules: a dynamics engine that maintains an internal state variable and recurrently updates it, an external memory module that remembers what the model has generated, and a rendering engine that decodes the output image at each time step.
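To make the three-module layout concrete, here is a minimal PyTorch-style sketch based only on the description above. The class names, layer choices and every dimension (hidden size, number of memory slots, the deconvolutional decoder) are illustrative assumptions rather than the paper's actual architecture, and the memory is shown as read-only for brevity even though the paper's memory module is also written to.

```python
# Illustrative sketch only -- not GameGAN's actual networks or hyperparameters.
import torch
import torch.nn as nn


class DynamicsEngine(nn.Module):
    """Keeps an internal state that is recurrently updated from the previous
    state, the current action, and features of the last generated frame."""
    def __init__(self, action_dim=8, frame_feat_dim=256, hidden_dim=512):
        super().__init__()
        self.cell = nn.GRUCell(action_dim + frame_feat_dim, hidden_dim)

    def forward(self, state, action, frame_feat):
        return self.cell(torch.cat([action, frame_feat], dim=-1), state)


class ExternalMemory(nn.Module):
    """A bank of memory slots read with soft attention, so the model can
    remember what it has already generated (read-only here for brevity)."""
    def __init__(self, num_slots=64, slot_dim=512, hidden_dim=512):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_slots, slot_dim))
        self.query = nn.Linear(hidden_dim, slot_dim)

    def forward(self, state):
        attn = torch.softmax(self.query(state) @ self.memory.t(), dim=-1)
        return attn @ self.memory  # weighted read-out, shape (batch, slot_dim)


class RenderingEngine(nn.Module):
    """Decodes the current state plus the memory read-out into an image."""
    def __init__(self, hidden_dim=512, slot_dim=512):
        super().__init__()
        self.fc = nn.Linear(hidden_dim + slot_dim, 256 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, state, memory_read):
        x = self.fc(torch.cat([state, memory_read], dim=-1))
        return self.deconv(x.view(-1, 256, 4, 4))  # a 3 x 32 x 32 frame


# One illustrative generation step with made-up dimensions.
dyn, mem, ren = DynamicsEngine(), ExternalMemory(), RenderingEngine()
state = torch.zeros(1, 512)                                # initial state
state = dyn(state, torch.zeros(1, 8), torch.zeros(1, 256)) # action + frame feat
next_frame = ren(state, mem(state))                        # shape (1, 3, 32, 32)
```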

The core modules are neural networks that are trained end-to-end. During training, GameGAN ingests gameplay screen frames together with the corresponding keyboard actions, and conditions on these to predict the next frame. GameGAN can thus learn from rollouts of image-and-action pairs without requiring access to the underlying game logic or engine.
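The training recipe can be illustrated with a short, hedged sketch: unroll a predictor over a recorded rollout of (frame, action) pairs and train it with an adversarial loss plus a reconstruction term. The tiny GRU generator and linear discriminator below are stand-ins for GameGAN's actual networks, and the loss formulation and weighting are assumptions made for illustration, not the paper's objective.

```python
# Illustrative rollout-based training step -- stand-in networks and losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

frame_dim, action_dim, hidden_dim = 3 * 32 * 32, 8, 256

# Stand-in next-frame generator: (previous frame, action, state) -> next frame.
gru = nn.GRUCell(frame_dim + action_dim, hidden_dim)
decode = nn.Linear(hidden_dim, frame_dim)
# Stand-in discriminator judging whether a (frame, action, next frame)
# transition looks real.
disc = nn.Sequential(nn.Linear(2 * frame_dim + action_dim, 256),
                     nn.LeakyReLU(0.2), nn.Linear(256, 1))

g_opt = torch.optim.Adam(list(gru.parameters()) + list(decode.parameters()), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

# One recorded rollout: T frames and the keyboard action taken at each step.
# Random tensors stand in for real gameplay data here.
T, B = 16, 4
frames = torch.rand(T, B, frame_dim) * 2 - 1
actions = F.one_hot(torch.randint(0, action_dim, (T, B)), action_dim).float()

state = torch.zeros(B, hidden_dim)
g_loss = d_loss = 0.0
for t in range(T - 1):
    state = gru(torch.cat([frames[t], actions[t]], dim=-1), state)
    pred = torch.tanh(decode(state))                      # predicted frame t+1

    real_pair = torch.cat([frames[t], actions[t], frames[t + 1]], dim=-1)
    fake_pair = torch.cat([frames[t], actions[t], pred], dim=-1)

    # Discriminator: real transitions -> 1, generated transitions -> 0.
    d_loss = d_loss + F.binary_cross_entropy_with_logits(disc(real_pair), torch.ones(B, 1)) \
                    + F.binary_cross_entropy_with_logits(disc(fake_pair.detach()), torch.zeros(B, 1))
    # Generator: fool the discriminator and stay close to the recorded frame.
    g_loss = g_loss + F.binary_cross_entropy_with_logits(disc(fake_pair), torch.ones(B, 1)) \
                    + F.mse_loss(pred, frames[t + 1])

# Alternate updates: generator first, then clear and update the discriminator.
g_opt.zero_grad(); g_loss.backward(retain_graph=True); g_opt.step()
d_opt.zero_grad(); d_loss.backward(); d_opt.step()
```

The point the sketch is meant to convey is that nothing in the loop touches the game's engine or logic; supervision comes entirely from the recorded frames and the actions that produced them.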


Researchers tested the GameGAN system on a modified version of Pac-Man and the VizDoom environment, conducting both quantitative and qualitative evaluations with four models: Action-LSTM, World Model, GameGAN-M (GameGAN without the memory module and with a simple rendering engine), and the full GameGAN model.

In the experiments, the full GameGAN model produced higher-quality results than the baselines while also supporting practical applications such as transferring a given game from one operating system to another without rewriting any code. In the future, the researchers hope to extend the model to capture more complex real-world environments.

The paper Learning to Simulate Dynamic Environments with GameGAN was accepted to CVPR 2020 and is on arXiv. There is also a project page on GitHub.


Author: Yuqing Li | Editor: Michael Sarazen
