Smoothness and Stability in GANs
Authors: Casey Chu, Kentaro Minami, Kenji Fukumizu
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically test the theoretical learning rate given by Theorem 1 and Proposition 1 as well as our regularization scheme (7) based on inf-convolutions. ... We trained each model for 100,000 steps on CIFAR-10 and evaluate each model using the Fréchet Inception Distance (FID) of Heusel et al. (2017). |
| Researcher Affiliation | Collaboration | Casey Chu (Stanford University, caseychu@stanford.edu); Kentaro Minami (Preferred Networks, Inc., minami@preferred.jp); Kenji Fukumizu (The Institute of Statistical Mathematics / Preferred Networks, Inc., fukumizu@ism.ac.jp) |
| Pseudocode | No | The paper discusses algorithmic concepts and derivations but does not present any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the methodology or provide a link to a code repository. |
| Open Datasets | Yes | We trained each model for 100,000 steps on CIFAR-10 and evaluate each model using the Fréchet Inception Distance (FID) of Heusel et al. (2017). |
| Dataset Splits | No | The paper mentions training models for 100,000 steps on CIFAR-10 but does not specify the dataset splits (e.g., percentages or sample counts for training, validation, or test sets). |
| Hardware Specification | No | The paper describes the discriminator as a '7-layer convolutional neural network' and mentions a 'particle-based generator', but does not specify any hardware details such as GPU or CPU models used for training. |
| Software Dependencies | No | The paper mentions using 'spectral normalization' and 'ELU activations' for the discriminator but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | We randomly generated hyperparameter settings for the Lipschitz constant α, the smoothness constant β₂, the number of particles N, and the learning rate γ. We trained each model for 100,000 steps on CIFAR-10. (A hedged sketch of this setup appears below the table.) |
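
To make the quoted setup concrete, here is a minimal sketch of the two pieces the table describes: random sampling of the four hyperparameters (α, β₂, N, γ) and a 7-layer convolutional discriminator with spectral normalization and ELU activations. The sampling ranges, layer widths, and kernel sizes are assumptions for illustration; the paper does not report them in the quotes above.

```python
# Hedged sketch, not the authors' code: reconstructs the experimental setup
# quoted in the table. All numeric ranges and layer shapes are assumptions.
import random
import torch.nn as nn
from torch.nn.utils import spectral_norm


def sample_hyperparameters():
    """Randomly draw the four hyperparameters the paper searches over.
    The ranges below are illustrative guesses, not values from the paper."""
    return {
        "alpha": random.uniform(0.1, 10.0),        # Lipschitz constant (range assumed)
        "beta2": random.uniform(0.1, 10.0),        # smoothness constant (range assumed)
        "N": random.choice([1024, 4096, 16384]),   # number of particles (values assumed)
        "gamma": 10 ** random.uniform(-5, -2),     # learning rate (range assumed)
    }


def make_discriminator():
    """7-layer discriminator with spectral normalization and ELU activations,
    per the table's quotes: six spectrally normalized conv layers plus a
    spectrally normalized linear head for 32x32 CIFAR-10 inputs."""
    widths = [3, 64, 64, 128, 128, 256, 256]
    layers = []
    for c_in, c_out in zip(widths[:-1], widths[1:]):
        layers += [
            spectral_norm(nn.Conv2d(c_in, c_out, kernel_size=3, stride=1, padding=1)),
            nn.ELU(),
        ]
    # Seventh layer: linear output head (spatial size stays 32x32 with stride 1).
    layers += [nn.Flatten(), spectral_norm(nn.Linear(256 * 32 * 32, 1))]
    return nn.Sequential(*layers)


if __name__ == "__main__":
    print(sample_hyperparameters())
    print(make_discriminator())
```

Each sampled configuration would then be trained for 100,000 steps on CIFAR-10 and scored with FID (Heusel et al., 2017), as the Research Type and Open Datasets rows quote.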