Starter notes from the "How to Train a GAN?" session at NIPS 2016.
(this list is no longer maintained, and I am not sure how relevant it is in 2020)
While research in Generative Adversarial Networks (GANs) continues to improve the fundamental stability of these models, we use a bunch of tricks to train them and make them stable day to day.
Here is a summary of some of those tricks.
If you find a trick that is particularly useful in practice, please open a Pull Request to add it to the document. If we find it to be reasonable and verified, we will merge it in.
In GAN papers, the loss function to optimize G is `min log(1 - D(G(z)))`, but in practice people use `max log D(G(z))`, because the first formulation has vanishing gradients early on (Goodfellow et al., 2014).
In practice, this works well:

- Flip labels when training the generator: real = fake, fake = real (see the sketch after this list).
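A minimal PyTorch sketch of this trick, assuming a discriminator `netD` that ends in a sigmoid and outputs shape `(batch, 1)`; the names `netG`, `netD`, `opt_G`, and `z_dim` are illustrative, not from the original notes:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

def generator_step(netG, netD, opt_G, batch_size, z_dim, device):
    # Sample latent noise and generate a batch of fakes.
    z = torch.randn(batch_size, z_dim, device=device)
    fake = netG(z)
    # Label flip: present generated samples as "real" (label 1), so that
    # minimizing BCE maximizes log D(G(z)) instead of minimizing log(1 - D(G(z))).
    real_labels = torch.ones(batch_size, 1, device=device)
    loss_G = criterion(netD(fake), real_labels)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```

This is the same non-saturating objective as the formula above; the label flip is just a convenient way to get it out of a standard BCE loss.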
If you try to balance how much you train D versus G, use a principled, loss-based schedule rather than intuition:

```
while lossD > A: train D
while lossG > B: train G
```
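A literal Python rendering of that schedule; the thresholds `A`/`B` and the step functions `train_D_step`/`train_G_step` are hypothetical placeholders, each assumed to run one optimization step and return the resulting loss as a float:

```python
A, B = 0.7, 0.7  # illustrative thresholds, not from the source

def run_schedule(train_D_step, train_G_step, num_rounds=1000):
    for _ in range(num_rounds):
        # Keep updating D until its loss drops below A...
        while train_D_step() > A:
            pass
        # ...then keep updating G until its loss drops below B.
        while train_G_step() > B:
            pass
```

In practice you would also cap the inner loops, since a loss that never crosses its threshold would otherwise stall training.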