Improved Training of Generative Adversarial Networks Using Representative Features
Authors: Duhyeon Bang, Hyunjung Shim
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental evaluations show RFGAN effectiveness, improving existing GANs including those incorporating gradient penalty (Kodali et al., 2017; Gulrajani et al., 2017; Fedus et al., 2017). Section 4 summarizes the results of extensive experiments including simulated and real data. The quantitative and qualitative evaluations show that the proposed RFGAN simultaneously improved image quality and diversity. |
| Researcher Affiliation | Academia | Duhyeon Bang 1 Hyunjung Shim 1 1School of Integrated Technology, Yonsei University, South Korea. Correspondence to: Hyunjung Shim <kateshim@yonsei.ac.kr>. |
| Pseudocode | No | The paper describes the model and gradient updates mathematically but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper references a GitHub link (https://github.com/carpedm20/DCGAN-tensorflow) for a baseline DCGAN implementation but does not state that the code for the proposed RFGAN methodology is publicly available or provide a link to it. |
| Open Datasets | Yes | For quantitative and qualitative evaluations, we include simulated and three real datasets: CelebA (Liu et al., 2015), LSUN-bedroom (Yu et al., 2015), and CIFAR-10 (Krizhevsky & Hinton, 2009), normalizing between -1 and 1. |
| Dataset Splits | No | The paper mentions using training data and generating samples for evaluation but does not provide specific train/validation/test dataset splits (e.g., percentages or sample counts) or cross-validation details for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or other computer specifications used for running the experiments. |
| Software Dependencies | No | The paper refers to 'TensorFlow' implicitly through a baseline code link, but it does not provide specific version numbers for TensorFlow or any other software dependencies like programming languages or libraries used in their experiments. |
| Experiment Setup | Yes | input dimensionality is set at (64, 64, 3)... modify network dimensions for the CIFAR-10 dataset, fitting the input into (32, 32, 3)... drew 500k images randomly from the LSUN-bedroom dataset for efficient training and comparison. We use exactly the same hyper-parameters, metrics, and settings throughout this paper, as suggested for a baseline GAN... The WGAN-GP generator is updated once after the discriminator is updated five times. Following the reference code, other networks are trained by updating the generator twice and the discriminator once. |
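The alternating update ratios quoted above (five discriminator updates per generator update for WGAN-GP; two generator updates per discriminator update for the other networks) can be sketched as a training schedule. This is a minimal illustrative sketch, not the paper's code; the `train_steps` helper is hypothetical, and the actual experiments use TensorFlow networks based on the referenced DCGAN baseline.

```python
def train_steps(n_iters, d_steps, g_steps):
    """Return a flat schedule of 'D'/'G' update steps.

    Each iteration performs d_steps discriminator updates followed by
    g_steps generator updates, matching the ratios described in the paper
    (hypothetical helper for illustration only).
    """
    schedule = []
    for _ in range(n_iters):
        schedule.extend(["D"] * d_steps)  # discriminator updates
        schedule.extend(["G"] * g_steps)  # generator updates
    return schedule

# WGAN-GP: generator updated once after five discriminator updates.
wgan_gp_schedule = train_steps(n_iters=1, d_steps=5, g_steps=1)
# Baseline (reference code): generator updated twice, discriminator once.
baseline_schedule = train_steps(n_iters=1, d_steps=1, g_steps=2)
```

In an actual training loop, each `"D"` step would draw a fresh minibatch and update the discriminator, and each `"G"` step would update the generator with the discriminator frozen.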