Georgia Tech & Google Brain’s GAN Lab Visualizes Model Training in Browsers

Georgia Tech and Google Brain researchers have introduced GAN Lab, a new interactive tool that visually presents the training process of Generative Adversarial Networks (GANs), a complex class of machine learning models. Even machine learning newcomers can now experiment with GAN models using nothing more than a common web browser.

This real-time visualization tool is built with Google’s new machine learning JavaScript library, TensorFlow.js. Users can watch, step by step, how a GAN learns the distribution of points in a 2D (x, y) space. Readers interested in the software details can download GAN Lab’s open-source code from GitHub.

Core components of a GAN framework include a generator and a discriminator. The generator creates fake data instances and the discriminator attempts to distinguish the fakes from real data. Both components improve as the model is trained: the more they compete with each other, the more realistic-looking outputs the generator will produce.

[Image: Schematic of a commonly used GAN architecture]
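The adversarial objective behind this competition can be written as two simple losses. The following is a minimal Python sketch of the standard (non-saturating) GAN losses from the original formulation, not GAN Lab’s own code:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants D(real) -> 1 and D(fake) -> 0:
    # minimize -log D(real) - log(1 - D(fake)).
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants to fool the discriminator, D(fake) -> 1:
    # minimize -log D(fake) (the "non-saturating" variant).
    return -np.mean(np.log(d_fake))

# A confident discriminator has a low loss; one fooled into
# outputting ~0.5 everywhere has a much higher loss.
confident = discriminator_loss(np.array([0.9, 0.95]), np.array([0.1, 0.05]))
fooled = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

When both losses are minimized in alternation, each network pushes the other to improve, which is exactly the competition the paragraph above describes.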

GAN Lab’s interactive features enable users to conduct and control experiments. Users can start a GAN training session by selecting a built-in sample distribution or drawing one themselves. An accompanying animation displays the function of each model component in real time, and users can play it back in slow motion for more detailed analysis.

Users can also step through training one iteration at a time. Hyperparameters such as the number of hidden layers and neurons in the network, the loss function, the optimization algorithm, and the learning rate are all configurable for GAN model training.

[Image: Training a simple distribution of data points using GAN Lab]
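To make the list of configurable settings concrete, a hypothetical configuration might look like the sketch below; the key names are invented for illustration and are not GAN Lab’s actual option names:

```python
# Hypothetical configuration mirroring the knobs GAN Lab exposes.
# The key names here are illustrative, not GAN Lab's real API.
gan_config = {
    "generator":     {"hidden_layers": 2, "neurons_per_layer": 13},
    "discriminator": {"hidden_layers": 2, "neurons_per_layer": 13},
    "loss": "log",                # e.g. log loss vs. least squares
    "optimizer": "sgd",
    "learning_rate": {"generator": 0.01, "discriminator": 0.01},
}
```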

After hitting the “play” button, GAN Lab animates the entire input-to-output transformation from noise to fake samples. The fake samples’ positions are continuously updated, and their distribution gradually begins to overlap with that of the real samples.

The discriminator’s decision boundary is presented in the layered distributions view as a 2D heatmap. When model training begins, the discriminator can easily separate real samples from fake ones, and most data samples fall into correspondingly coloured regions.

[Image: The discriminator performing well, visualized as a 2D heatmap]
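A heatmap like this is simply the discriminator evaluated over a grid of 2D points. A minimal sketch with a toy stand-in discriminator (GAN Lab itself runs a small neural network here):

```python
import numpy as np

def toy_discriminator(x, y):
    # Illustrative stand-in: confident that points near (0.5, 0.5)
    # are real. GAN Lab evaluates its trained network instead.
    d2 = (x - 0.5) ** 2 + (y - 0.5) ** 2
    return np.exp(-8.0 * d2)          # in (0, 1]; high = "real"

# Evaluate the discriminator over the unit square -- exactly the
# data a 2D heatmap visualization needs.
xs, ys = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
heatmap = toy_discriminator(xs, ys)   # shape (50, 50)
```

Rendering `heatmap` with any image-plotting routine reproduces the coloured-regions view described above.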


GAN Lab’s layered distributions view also shows how the fake samples move along the generator’s gradient directions, which are determined by where the fakes currently sit in the discriminator’s classification. As training progresses, the loss function value decreases and model accuracy increases.

[Image: Fake sample movements directed by the generator’s gradients]
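This movement can be sketched as a gradient step that pushes each fake sample toward a higher “realness” score. The toy discriminator and finite-difference gradient below are illustrative stand-ins; GAN Lab backpropagates through its actual discriminator network:

```python
import numpy as np

def d_score(pts):
    # Toy stand-in for the discriminator's "realness" score,
    # peaked at (0.5, 0.5). pts has shape (n, 2).
    return np.exp(-8.0 * np.sum((pts - 0.5) ** 2, axis=-1))

def generator_step(fakes, lr=0.05, eps=1e-4):
    # Move each fake sample along the discriminator's gradient,
    # i.e. in the direction that raises its score.
    grad = np.zeros_like(fakes)
    for i in range(fakes.shape[1]):
        bump = np.zeros_like(fakes)
        bump[:, i] = eps
        grad[:, i] = (d_score(fakes + bump) - d_score(fakes - bump)) / (2 * eps)
    return fakes + lr * grad

fakes = np.array([[0.1, 0.1], [0.9, 0.2]])
moved = generator_step(fakes)  # every sample ends up scored more "real"
```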

Responding to the generator’s improvements, the discriminator constantly updates its decision boundary to identify fakes. As a GAN model approaches its optimum, all samples fall in a region where the heatmap is mostly gray, indicating that the fake samples are so realistic the discriminator can barely tell the difference between them and the real samples.
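The “mostly gray” heatmap follows from a classic result in the original GAN paper: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), so once the generator’s distribution matches the data distribution, D* is 0.5 everywhere. A quick numeric check:

```python
def optimal_discriminator(p_data, p_g):
    # Goodfellow et al. (2014): for a fixed generator, the optimal
    # discriminator outputs p_data(x) / (p_data(x) + p_g(x)).
    return p_data / (p_data + p_g)

# Early in training the generator's density misses the data,
# so the optimal discriminator is confident:
early = optimal_discriminator(p_data=0.8, p_g=0.1)     # ~0.89
# At convergence p_g matches p_data, so D* = 0.5 everywhere --
# the "mostly gray" heatmap.
converged = optimal_discriminator(p_data=0.8, p_g=0.8)  # 0.5
```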

GAN Lab was co-developed by Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas and Martin Wattenberg. Their paper, “GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation,” is available on arXiv and has been accepted by the respected academic journal IEEE Transactions on Visualization and Computer Graphics (TVCG).

Source: Synced China


Localization: Tingting Cao | Editor: Michael Sarazen
