Towards a Better Global Loss Landscape of GANs
Authors: Ruoyu Sun, Tiantian Fang, Alexander Schwing
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic data show that the predicted bad basin can indeed appear in training. We also perform experiments to support our theory that RpGAN has a better landscape than separable-GAN. For instance, we empirically show that RpGAN performs better than separable-GAN with relatively narrow neural nets. |
| Researcher Affiliation | Academia | Ruoyu Sun, Tiantian Fang, Alex Schwing, University of Illinois at Urbana-Champaign, {ruoyus,tf6,aschwing}@illinois.edu |
| Pseudocode | No | The paper does not contain any blocks explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | The code is available at https://github.com/AilsaF/RS-GAN. |
| Open Datasets | Yes | For setting (A), we test on CIFAR-10 and STL-10 data. |
| Dataset Splits | No | The paper refers to using CIFAR-10 and STL-10, which have standard splits, but it does not explicitly state the train/validation/test percentages or sample counts for these datasets. It mentions evaluating generated samples (50k and 10k) but not the dataset splits themselves. |
| Hardware Specification | No | The paper does not specify any particular hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify software dependencies like programming language versions or library versions (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | For the optimizer, we use Adam with the discriminator's learning rate 0.0002. For CIFAR-10 on ResNet, we set β1 = 0 and β2 = 0.9 in Adam; for others, β1 = 0.5 and β2 = 0.999. We tune the generator's learning rate and run 100k iterations in total. A minimal configuration sketch follows the table. |
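
The Experiment Setup row reports concrete Adam hyperparameters, so the settings can be sketched in code. The sketch below assumes PyTorch; the `G` and `D` modules are placeholders, not the paper's actual ResNet/DCGAN architectures, and the generator's learning rate is set to a placeholder value since the paper states it was tuned rather than fixed.

```python
import torch

# Placeholder networks; the paper's actual generator/discriminator
# architectures are not reproduced here.
G = torch.nn.Linear(128, 3 * 32 * 32)   # hypothetical generator stand-in
D = torch.nn.Linear(3 * 32 * 32, 1)     # hypothetical discriminator stand-in

# Setting reported in the paper: beta values depend on the configuration.
use_resnet_cifar10 = True
betas = (0.0, 0.9) if use_resnet_cifar10 else (0.5, 0.999)

# Discriminator learning rate is fixed at 2e-4 per the reported setup;
# the generator's learning rate was tuned, so 2e-4 here is only a placeholder.
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=betas)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=betas)

total_iterations = 100_000  # "100k iterations in total"
```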