This repository implements, in PyTorch, the Dirac-GAN proposed in the paper "Which Training Methods for GANs do actually Converge?" by Mescheder et al. [1]. The authors' original implementation can be found here.
This work was done as part of the lecture Deep Generative Models at TU Darmstadt held by Dr. Anirban Mukhopadhyay.
Parts of this implementation are taken from my recent mode collapse example repository.
Training dynamics for each setting (result plots omitted): standard GAN loss, non-saturating GAN loss, Wasserstein GAN, Wasserstein GAN loss + GP, least squares GAN, hinge GAN, and DRAGAN loss.
This repository implements the following GAN losses and regularizers (formulas in their standard form):

Method | Generator loss | Discriminator loss
---|---|---
Original GAN loss | $\mathbb{E}_z[\log(1 - D(G(z)))]$ | $-\mathbb{E}_x[\log D(x)] - \mathbb{E}_z[\log(1 - D(G(z)))]$
Non-saturating GAN loss | $-\mathbb{E}_z[\log D(G(z))]$ | $-\mathbb{E}_x[\log D(x)] - \mathbb{E}_z[\log(1 - D(G(z)))]$
Wasserstein GAN loss | $-\mathbb{E}_z[D(G(z))]$ | $-\mathbb{E}_x[D(x)] + \mathbb{E}_z[D(G(z))]$
Wasserstein GAN loss + grad. pen. | $-\mathbb{E}_z[D(G(z))]$ | $-\mathbb{E}_x[D(x)] + \mathbb{E}_z[D(G(z))] + \lambda\,\mathbb{E}_{\hat{x}}[(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1)^2]$
Least squares GAN loss | $\mathbb{E}_z[(D(G(z)) - 1)^2]$ | $\mathbb{E}_x[(D(x) - 1)^2] + \mathbb{E}_z[D(G(z))^2]$
Hinge GAN | $-\mathbb{E}_z[D(G(z))]$ | $\mathbb{E}_x[\max(0, 1 - D(x))] + \mathbb{E}_z[\max(0, 1 + D(G(z)))]$
DRAGAN | $-\mathbb{E}_z[\log D(G(z))]$ | $-\mathbb{E}_x[\log D(x)] - \mathbb{E}_z[\log(1 - D(G(z)))] + \lambda\,\mathbb{E}_{\tilde{x}}[(\lVert\nabla_{\tilde{x}} D(\tilde{x})\rVert_2 - 1)^2]$

Here $\hat{x}$ denotes samples interpolated between real and generated data (WGAN-GP), $\tilde{x}$ denotes perturbed real samples (DRAGAN), and $\lambda$ is the penalty weight.
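In the Dirac-GAN setting of [1], the generator outputs a single point $\theta$ (a Dirac distribution), the true data distribution is a Dirac at zero, and the discriminator is linear, $D(x) = \psi x$. As an illustration (not this repository's API; all names, learning rates, and step counts below are made up), alternating gradient descent with the non-saturating loss can be sketched as:

```python
import torch

# Illustrative Dirac-GAN sketch: generator = single parameter theta,
# real data = Dirac at x = 0, discriminator = linear logit D(x) = psi * x.
theta = torch.tensor([1.0], requires_grad=True)  # generator parameter
psi = torch.tensor([1.0], requires_grad=True)    # discriminator slope

def d(x, psi):
    return psi * x  # linear discriminator logit

def generator_loss(theta, psi):
    # Non-saturating generator loss: -log sigmoid(D(G(z)))
    return -torch.log(torch.sigmoid(d(theta, psi)))

def discriminator_loss(theta, psi):
    real = -torch.log(torch.sigmoid(d(torch.zeros(1), psi)))      # real sample at x = 0
    fake = -torch.log(1 - torch.sigmoid(d(theta.detach(), psi)))  # generated sample
    return real + fake

opt_g = torch.optim.SGD([theta], lr=0.1)
opt_d = torch.optim.SGD([psi], lr=0.1)

for step in range(100):  # alternating gradient descent
    opt_d.zero_grad()
    discriminator_loss(theta, psi).backward()
    opt_d.step()

    opt_g.zero_grad()
    generator_loss(theta, psi).backward()
    opt_g.step()
```

Tracking `(theta, psi)` over the iterations reproduces the behavior analyzed in [1]: without regularization the iterates circle the equilibrium at the origin instead of converging to it.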
Dirac-GAN is written in PyTorch 1.8.1; no GPU is required. All additional dependencies are listed in the requirements.txt
file. To install all dependencies, simply run:
pip install -r requirements.txt
Older versions of PyTorch may also run the code without issues.
The implementation provides a simple GUI to run all Dirac-GAN experiments with different settings. Simply run:
python main.py
Set the desired parameters in the GUI and click "Run training" to perform training. This can take a few seconds. When training is finished, all results are plotted and shown.
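The gradient-penalty regularizers in the table above (WGAN-GP and DRAGAN) both penalize the norm of the discriminator's gradient at chosen sample points. A minimal, self-contained sketch of such a penalty using `torch.autograd.grad` (illustrative only, not this repository's API; the weight $\lambda = 10$ is a common default, assumed here):

```python
import torch

def gradient_penalty(d, x, target_norm=1.0, weight=10.0):
    """Generic gradient-norm penalty of the WGAN-GP / DRAGAN form.

    d: callable mapping samples to discriminator outputs.
    x: points at which the gradient is penalized (interpolates between
       real and fake data for WGAN-GP, perturbed real samples for DRAGAN).
    """
    x = x.clone().requires_grad_(True)
    out = d(x)
    # create_graph=True keeps the penalty differentiable w.r.t. D's parameters
    grad, = torch.autograd.grad(out.sum(), x, create_graph=True)
    return weight * ((grad.norm(2, dim=-1) - target_norm) ** 2).mean()

# Example with the linear Dirac-GAN discriminator D(x) = psi * x:
psi = torch.tensor(2.0, requires_grad=True)
x = torch.randn(8, 1)
gp = gradient_penalty(lambda x: psi * x, x)  # gradient norm is |psi| = 2 everywhere
```

For the linear discriminator the gradient norm is constant, so the penalty evaluates to $10 \cdot (2 - 1)^2 = 10$ regardless of the sample points.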
[1] @inproceedings{Mescheder2018,
title={Which training methods for GANs do actually converge?},
author={Mescheder, Lars and Geiger, Andreas and Nowozin, Sebastian},
booktitle={International conference on machine learning},
pages={3481--3490},
year={2018},
organization={PMLR}
}