CAGAN: Consistent Adversarial Training Enhanced GANs
Authors: Yao Ni, Dandan Song, Xi Zhang, Hao Wu, Lejian Liao
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that our method can obtain state-of-the-art Inception scores of 9.17 and 10.02 on supervised CIFAR-10 and unsupervised STL10 image generation tasks, respectively, as well as achieve competitive semi-supervised classification results on several benchmarks. Importantly, we demonstrate that our method can maintain stability in training and alleviate mode collapse. |
| Researcher Affiliation | Academia | Yao Ni, Dandan Song, Xi Zhang, Hao Wu and Lejian Liao, Lab of High Volume Language Information Processing & Cloud Computing, Beijing Lab of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology. {niyao, sdd, xi zhang, hao wu, liaolj}@bit.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., specific link, explicit statement of release in supplementary materials) for its source code. |
| Open Datasets | Yes | To investigate the effectiveness of our CAGAN on image generation task, we conduct experiments on two benchmark datasets: CIFAR-10 [Krizhevsky, 2009] and STL-10 [Coates et al., 2011]. CIFAR-10 contains 50,000 labeled training images of size 32×32 from 10 classes. ... STL-10 is subsampled from ImageNet, which is more diverse than CIFAR-10, and it contains 100,000 unlabeled images of size 96×96. ... MNIST, SVHN, and CIFAR-10. (A data-loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions using training images and an entire training set for unsupervised training but does not explicitly provide details about specific training/validation/test splits, percentages, or a defined validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer' but does not specify any software dependencies with version numbers (e.g., programming language version, framework version, library versions). |
| Experiment Setup | Yes | We keep the hyper-parameters the same on CIFAR-10 and STL-10 for all the experiments. In particular, we follow the original WGAN-GP and set λ_GP = 10, a mini-batch size of 64 when training D, and a mini-batch size of 128 when training G. We use the Adam optimizer with a learning rate of 0.0002, β_1 = 0, β_2 = 0.9 to train G and D, and the learning rate is decreased linearly to 0. For the consistent adversarial hyper-parameters, we set λ_f = 0.1 and λ_CA = 2, and train for 700 epochs in total. (A configuration sketch follows the table.) |
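The Open Datasets row above refers to the standard public releases of CIFAR-10 and STL-10. As a hedged illustration only, the snippet below sketches how those two benchmarks could be loaded with `torchvision`; the use of `torchvision`, the `./data` root path, and the normalization transform are assumptions for this sketch, not details given in the paper.

```python
# Minimal sketch (assumed tooling: torchvision) for loading the two
# image-generation benchmarks quoted above, at their native resolutions.
import torchvision.datasets as dsets
import torchvision.transforms as T

to_tensor = T.Compose([T.ToTensor(), T.Normalize((0.5,) * 3, (0.5,) * 3)])

# CIFAR-10: 50,000 labeled 32x32 training images from 10 classes.
cifar10_train = dsets.CIFAR10(root="./data", train=True,
                              transform=to_tensor, download=True)

# STL-10: 100,000 unlabeled 96x96 images subsampled from ImageNet.
stl10_unlabeled = dsets.STL10(root="./data", split="unlabeled",
                              transform=to_tensor, download=True)

print(len(cifar10_train), len(stl10_unlabeled))  # 50000, 100000
```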
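The hyper-parameters quoted in the Experiment Setup row translate directly into optimizer and schedule settings. The sketch below collects them in PyTorch; since no source code is released, only the numeric values (λ_GP, λ_f, λ_CA, the batch sizes, the Adam settings, the 700 epochs, and the linear decay to 0) come from the quoted text. The `G` and `D` modules are placeholders, and every other structural choice is an assumption made for illustration.

```python
# Hedged configuration sketch of the quoted hyper-parameters (PyTorch).
# G and D below are placeholders, not the paper's architectures.
import torch
import torch.nn as nn

cfg = dict(
    lambda_gp=10.0,     # WGAN-GP gradient-penalty weight (λ_GP)
    lambda_f=0.1,       # consistent adversarial hyper-parameter λ_f
    lambda_ca=2.0,      # consistent adversarial hyper-parameter λ_CA
    batch_size_d=64,    # mini-batch size when training D
    batch_size_g=128,   # mini-batch size when training G
    lr=2e-4,            # Adam learning rate
    betas=(0.0, 0.9),   # Adam β_1 and β_2
    epochs=700,         # total training epochs
)

G = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())   # placeholder generator
D = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))  # placeholder critic

opt_g = torch.optim.Adam(G.parameters(), lr=cfg["lr"], betas=cfg["betas"])
opt_d = torch.optim.Adam(D.parameters(), lr=cfg["lr"], betas=cfg["betas"])

# Learning rate decreased linearly to 0 over the 700 epochs.
decay = lambda epoch: 1.0 - epoch / cfg["epochs"]
sched_g = torch.optim.lr_scheduler.LambdaLR(opt_g, lr_lambda=decay)
sched_d = torch.optim.lr_scheduler.LambdaLR(opt_d, lr_lambda=decay)
```

A training loop would step `sched_g` and `sched_d` once per epoch after the usual WGAN-GP generator/critic updates; the consistency terms weighted by λ_f and λ_CA are part of CAGAN's loss and are not reproduced here.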