BigGAN Trained With Only 4 GPUs!

Andrew Brock, first author of the high-profile research paper Large Scale GAN Training for High Fidelity Natural Image Synthesis (aka “BigGAN”), has posted a GitHub repository of an unofficial PyTorch BigGAN implementation that requires only 4-8 GPUs to train the model.

The BigGAN paper has been tremendously popular since it was published last September. The network can generate high-quality images with vivid colours and realistic backgrounds, but the method is very compute-heavy: properly training BigGAN as described in the paper requires at least 128 Google TPU v3 cores, which is beyond the means of most developers. The new PyTorch BigGAN implementation on GitHub requires only 4-8 GPUs, and this dramatically lowered hardware barrier is being celebrated across the AI community.

The PyTorch BigGAN release contains training, testing and sampling scripts, along with complete pre-trained checkpoints (generator, discriminator and optimizer), so users can fine-tune on their own data or resume training from a saved state rather than starting from scratch.
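
For readers new to PyTorch checkpointing, here is a minimal sketch of how such separate generator/discriminator/optimizer checkpoints can be restored to resume or fine-tune training. The file paths and the tiny Generator class are hypothetical placeholders for illustration, not the repo's actual API.

```python
# Minimal sketch: restoring generator and optimizer state from saved
# checkpoints to continue training. Paths and the Generator class are
# hypothetical placeholders, not the BigGAN-PyTorch repo's actual API.
import torch
import torch.nn as nn
import torch.optim as optim

class Generator(nn.Module):
    """Placeholder generator; the real model is far larger."""
    def __init__(self, dim_z: int = 128):
        super().__init__()
        self.net = nn.Linear(dim_z, dim_z)

    def forward(self, z):
        return self.net(z)

G = Generator()
G_optim = optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.999))

# Assume each checkpoint is a state_dict saved earlier with torch.save().
G.load_state_dict(torch.load("weights/G.pth", map_location="cpu"))
G_optim.load_state_dict(torch.load("weights/G_optim.pth", map_location="cpu"))

G.train()  # training can now resume from the restored weights and optimizer state
```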

In the GitHub repository, Brock highlights design choices that make the code easy to run and to extend for future research:

  • The release includes the full training and metrics logs for reference, so users re-implementing the model can check their own runs against them instead of guessing whether they are on track.
  • The repo includes an accelerated FID calculation that cuts the matrix square-root computation from upwards of 10 minutes (with the original SciPy version) to just seconds (with the accelerated PyTorch version); the general idea is sketched after this list.
  • The repo includes an accelerated, low-memory implementation of orthogonal regularization (also sketched below).
  • By default, the repo only computes the top singular value (the spectral norm), but the provided code supports tracking more singular values via the --num_G_SVs argument (see the power-iteration sketch below).
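
The bulk of the FID speed-up comes from replacing the CPU-bound matrix square root with an iterative routine that runs as plain GPU matrix multiplies. Below is a hedged sketch of that general technique, a Newton-Schulz iteration for the matrix square root; it illustrates the idea rather than reproducing the repo's exact code.

```python
# Sketch: Newton-Schulz iteration for the matrix square root. FID needs the
# trace of (C1 @ C2) ** 0.5; computing that root with matmuls on the GPU is
# far faster than scipy.linalg.sqrtm. Illustration only, not the repo's code.
import torch

def sqrt_newton_schulz(A: torch.Tensor, num_iters: int = 50) -> torch.Tensor:
    """Approximate square root of a square, (roughly) PSD matrix A."""
    dim = A.size(0)
    norm_a = A.norm()                                   # Frobenius norm for scaling
    Y = A / norm_a
    I = torch.eye(dim, dtype=A.dtype, device=A.device)
    Z = torch.eye(dim, dtype=A.dtype, device=A.device)
    for _ in range(num_iters):                          # coupled Newton-Schulz updates
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T                                       # converges to sqrt(A / norm_a)
        Z = T @ Z                                       # converges to its inverse
    return Y * norm_a.sqrt()

# Sanity check against the definition: sqrt(A) @ sqrt(A) should recover A.
X = torch.randn(512, 64, dtype=torch.float64)
A = X.t() @ X                                           # a PSD, covariance-like matrix
S = sqrt_newton_schulz(A)
print((torch.dist(S @ S, A) / A.norm()).item())         # relative error, should be tiny
```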
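
The low-memory orthogonal regularization follows the variant described in the BigGAN paper, which penalises only the off-diagonal entries of each weight's Gram matrix. One memory-friendly way to apply it, assumed here for illustration rather than taken from the repo, is to add the penalty's gradient directly to each parameter's .grad under no_grad, so no extra autograd graph is built:

```python
# Sketch: BigGAN-style orthogonal regularisation, penalising the off-diagonal
# entries of the Gram matrix of each (row-flattened) weight W, i.e.
# R(W) ∝ ||W Wᵀ ⊙ (1 − I)||_F². The gradient is added straight into .grad,
# so no additional autograd graph (or memory) is needed. Illustration only.
import torch
import torch.nn as nn

def apply_ortho_reg(model: nn.Module, strength: float = 1e-4) -> None:
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() < 2 or p.grad is None:
                continue                              # skip biases and gains
            w = p.view(p.size(0), -1)                 # flatten to a 2-D matrix
            gram = w @ w.t()
            gram.fill_diagonal_(0.0)                  # keep only off-diagonal terms
            # d/dW ||W Wᵀ ⊙ (1 − I)||_F²  ∝  (W Wᵀ ⊙ (1 − I)) W
            p.grad.add_(strength * (gram @ w).view_as(p))

# Usage: call between loss.backward() and optimizer.step().
net = nn.Linear(16, 32)
loss = nn.MSELoss()(net(torch.randn(4, 16)), torch.randn(4, 32))
loss.backward()
apply_ortho_reg(net, strength=1e-4)
```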
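
As for the singular values: the spectral norm of a weight matrix is simply its largest singular value, and it can be estimated cheaply with power iteration. The sketch below shows that general idea; the num_svs argument is an illustrative stand-in for the notion behind --num_G_SVs (how many singular values to track), not the repo's implementation.

```python
# Sketch: estimating the top singular values of a weight matrix with power
# iteration plus crude deflation. Illustrates the idea behind tracking more
# than one singular value; not the repo's actual spectral-norm code.
import torch
import torch.nn.functional as F

def top_singular_values(W: torch.Tensor, num_svs: int = 1, num_iters: int = 10) -> torch.Tensor:
    svs = []
    W = W.clone()
    for _ in range(num_svs):
        u = F.normalize(torch.randn(W.size(0)), dim=0)   # random left vector
        for _ in range(num_iters):
            v = F.normalize(W.t() @ u, dim=0)            # right singular direction
            u = F.normalize(W @ v, dim=0)                # left singular direction
        sigma = u @ W @ v                                # singular-value estimate
        svs.append(sigma)
        W = W - sigma * torch.outer(u, v)                # deflate before the next value
    return torch.stack(svs)

W = torch.randn(64, 128)
print(top_singular_values(W, num_svs=2, num_iters=30))
print(torch.linalg.svdvals(W)[:2])                       # reference from a full SVD
```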

Building on the popularity of the original BigGAN, the significantly reduced compute requirements, together with the author's handy tips, should enable thousands of previously under-equipped researchers to start training and experimenting with their own BigGANs in PyTorch.


Author: Victor Lu | Editor: Michael Sarazen
