Implicit competitive regularization in GANs
Authors: Florian Schaefer, Hongkai Zheng, Animashree Anandkumar
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we use an existing implementation of WGAN-GP and show that by training it with CGD without any explicit regularization, we can improve the inception score (IS) on CIFAR10, without any hyperparameter tuning. |
| Researcher Affiliation | Academia | ¹Caltech, ²Shanghai Jiao Tong University. ³This work was produced while HZ was a visiting undergraduate researcher at Caltech. Correspondence to: Florian Schaefer <schaefer@caltech.edu>, Hongkai Zheng <devzhk@sjtu.edu.cn>. |
| Pseudocode | No | The paper describes methods and processes in narrative text and mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology or provide a link to a code repository. |
| Open Datasets | Yes | In comprehensive experiments on CIFAR 10, competitive gradient descent stabilizes previously unstable GAN formulations and achieves higher inception score compared to a wide range of explicit regularizers, using both WGAN loss and the original saturating GAN loss of Goodfellow et al. (2014). |
| Dataset Splits | No | The paper mentions training on CIFAR10 but does not provide specific details about dataset splits (e.g., percentages, sample counts, or explicit references to predefined validation splits). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions using a 'Pytorch implementation of inception score' but does not specify version numbers for PyTorch or any other software libraries or dependencies. |
| Experiment Setup | Yes | When using Adam on OGAN, we stick to the common practice of replacing the generator loss by $\mathbb{E}_{x \sim P_G}[-\log(D(x))]$, as this has been found to improve training stability (Goodfellow et al., 2014; 2016). In order to be generous to existing methods, we use an existing architecture intended for use with the WGAN gradient penalty (Gulrajani et al., 2017). As regularizers, we consider no regularization (NOREG), an ℓ2 penalty on the discriminator with different weights (L2), spectral normalization (Miyato et al., 2018) on the discriminator (SN), or a 1-centered gradient penalty on the discriminator, following Gulrajani et al. (2017) (GP). Following the advice in Goodfellow et al. (2016), we train generator and discriminator simultaneously, with the exception of WGAN-GP and Adam, for which we follow Gulrajani et al. (2017) in making five discriminator updates per generator update. |
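
The Research Type and Experiment Setup rows above hinge on training the GAN with competitive gradient descent (CGD) instead of Adam. Since the Open Source Code row records no code link, the block below is only a minimal sketch of one zero-sum CGD step in the spirit of Schaefer & Anandkumar (2019), assuming a scalar game value `f` that the generator minimizes and the discriminator maximizes; the helper names (`cgd_step`, `_cg`, `_flat`), the flat-vector bookkeeping, and the fixed number of conjugate-gradient iterations are illustrative choices, not taken from the paper.

```python
# Minimal sketch of one zero-sum CGD step (Schaefer & Anandkumar, 2019).
# Setting: the generator (params x) minimizes a scalar game value f(x, y) that the
# discriminator (params y) maximizes.  The updates are
#   dx = -lr * (I + lr^2 Dxy Dyx)^{-1} (grad_x f + lr * Dxy grad_y f)
#   dy = +lr * (I + lr^2 Dyx Dxy)^{-1} (grad_y f - lr * Dyx grad_x f)
# where Dxy, Dyx are the mixed second derivatives, applied via Hessian-vector products.
import torch
from torch.autograd import grad


def _flat(tensors):
    return torch.cat([t.reshape(-1) for t in tensors])


def _unflat(vec, like):
    out, i = [], 0
    for t in like:
        out.append(vec[i:i + t.numel()].view_as(t))
        i += t.numel()
    return out


def _cg(apply_A, b, iters=8):
    """A few conjugate-gradient iterations for the SPD system A x = b (flat tensors)."""
    x = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = r.dot(r)
    for _ in range(iters):
        Ap = apply_A(p)
        alpha = rs / (p.dot(Ap) + 1e-12)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r.dot(r)
        p = r + (rs_new / (rs + 1e-12)) * p
        rs = rs_new
    return x


def cgd_step(f, gen_params, disc_params, lr=1e-3, cg_iters=8):
    """One CGD step on the zero-sum game min_x max_y f(x, y)."""
    gen_params, disc_params = list(gen_params), list(disc_params)
    gx = grad(f, gen_params, create_graph=True)   # grad_x f (graph kept for HVPs)
    gy = grad(f, disc_params, create_graph=True)  # grad_y f

    def Dxy(v):  # mixed Hessian-vector product; v is a flat vector of discriminator shape
        out = grad(gy, gen_params, grad_outputs=_unflat(v, gy), retain_graph=True)
        return _flat(out)

    def Dyx(v):  # mixed Hessian-vector product; v is a flat vector of generator shape
        out = grad(gx, disc_params, grad_outputs=_unflat(v, gx), retain_graph=True)
        return _flat(out)

    gx_f, gy_f = _flat(gx).detach(), _flat(gy).detach()
    b_x = gx_f + lr * Dxy(gy_f)
    b_y = gy_f - lr * Dyx(gx_f)
    dx = -lr * _cg(lambda v: v + lr ** 2 * Dxy(Dyx(v)), b_x, cg_iters)
    dy = lr * _cg(lambda v: v + lr ** 2 * Dyx(Dxy(v)), b_y, cg_iters)

    with torch.no_grad():  # apply the parameter updates in place
        for p, d in zip(gen_params, _unflat(dx, gx)):
            p.add_(d)
        for p, d in zip(disc_params, _unflat(dy, gy)):
            p.add_(d)
```

In use, `f` would be, for example, the saturating GAN objective or the WGAN loss evaluated on a minibatch, followed by `cgd_step(f, generator.parameters(), discriminator.parameters())`. The paper's actual optimizer presumably handles step sizes and solver tolerances more carefully; this sketch fixes both.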
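
The GP regularizer named in the Experiment Setup row is the 1-centered gradient penalty of Gulrajani et al. (2017). Below is a minimal PyTorch sketch, assuming image-shaped batches and a hypothetical `disc` callable that maps a batch to per-sample scores; the function name and signature are illustrative.

```python
# Sketch of the "GP" regularizer: the 1-centered gradient penalty of
# Gulrajani et al. (2017), E[(||grad_x_hat D(x_hat)||_2 - 1)^2] evaluated on
# random interpolates between real and generated batches.
import torch


def gradient_penalty(disc, real, fake):
    # One interpolation coefficient per sample, broadcast over the remaining dims.
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real.detach() + (1.0 - eps) * fake.detach()).requires_grad_(True)
    scores = disc(x_hat)
    grads, = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                 grad_outputs=torch.ones_like(scores),
                                 create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()
```

The penalty is typically added to the discriminator loss with a weight (commonly λ = 10 in Gulrajani et al., 2017). The other regularizers in that row are likewise standard: the ℓ2 penalty adds a weighted sum of squared discriminator weights to the discriminator loss, and spectral normalization is available in PyTorch as `torch.nn.utils.spectral_norm`.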