On Computation and Generalization of Generative Adversarial Networks under Spectrum Control

Authors: Haoming Jiang, Zhehui Chen, Minshuo Chen, Feng Liu, Dingding Wang, Tuo Zhao

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on CIFAR-10, STL-10, and ImageNet datasets confirm that compared to other methods, our proposed method is capable of generating images with competitive quality by utilizing spectral normalization and encouraging the slow singular value decay.
Researcher Affiliation | Academia | Haoming Jiang, Zhehui Chen, Minshuo Chen & Tuo Zhao, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30318, USA ({jianghm, zhchen, mchen393, tourzhao}@gatech.edu); Feng Liu & Dingding Wang, Department of Computer & Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA ({fliu2016, wangd}@fau.edu)
Pseudocode | Yes | Algorithm 1, "Adversarial training with Spectrum Control of Discriminator D", which opens with an initialization step (a hedged training-loop sketch follows the table).
Open Source Code | No | The paper does not provide an explicit statement of open-source code release or a link to a public repository.
Open Datasets | Yes | Our experiments on CIFAR-10, STL-10, and ImageNet datasets confirm that compared to other methods, our proposed method is capable of generating images with competitive quality by utilizing spectral normalization and encouraging the slow singular value decay.
Dataset Splits | No | The paper mentions using CIFAR-10, STL-10, and ImageNet datasets but does not explicitly provide details about specific training, validation, and test data splits, such as percentages or sample counts.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU or CPU models, used for running the experiments.
Software Dependencies | No | The paper states "All implementations are done in Chainer" but does not specify a version number for Chainer or any other software dependencies.
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2014) with the following hyperparameters: (1) n_dis = 1; (2) α = 0.0002, the initial learning rate; (3) β1 = 0.5, β2 = 0.999, the first and second order momentum parameters of Adam respectively. We choose tuning parameters λ = 10 and γ = 1 in all the experiments except for the Divergence regularizer, where we pick λ = 10 and γ = 0.054. We take 100K iterations in all the experiments on CIFAR-10 and 200K iterations on STL-10, as suggested in Miyato et al. (2018).
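
The Pseudocode and Experiment Setup rows can be made concrete with a short sketch. Below is a minimal, hedged PyTorch example (the paper's implementation is in Chainer and is not reproduced here) of an alternating GAN training loop using the reported Adam hyperparameters (α = 0.0002, β1 = 0.5, β2 = 0.999, n_dis = 1) and tuning parameters (λ = 10, γ = 1). The SpectrumLinear layer, its spectrum_penalty method, the orthogonality-plus-log-singular-value penalty form, and the binary cross-entropy GAN loss are illustrative assumptions, not the authors' Algorithm 1 verbatim.

```python
# Hedged sketch, not the authors' Chainer code: an alternating GAN training loop
# that mirrors the reported optimizer setup and adds a spectrum-control style
# penalty on explicitly factored discriminator weight matrices.
import torch
import torch.nn as nn


class SpectrumLinear(nn.Module):
    """Linear layer with an explicit W = U diag(s) V^T factorization (illustrative)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        k = min(in_dim, out_dim)
        self.U = nn.Parameter(0.05 * torch.randn(out_dim, k))
        self.V = nn.Parameter(0.05 * torch.randn(in_dim, k))
        self.log_s = nn.Parameter(torch.zeros(k))  # singular values kept positive via exp

    def forward(self, x):
        W = self.U @ torch.diag(torch.exp(self.log_s)) @ self.V.t()
        return x @ W.t()

    def spectrum_penalty(self, lam=10.0, gamma=1.0):
        # lam * (||U^T U - I||_F^2 + ||V^T V - I||_F^2) keeps the factors near-orthogonal;
        # -gamma * sum(log s_i) rewards a slow singular-value decay. Both terms are an
        # illustrative reading of the paper's spectrum-control regularizers.
        eye = torch.eye(self.log_s.numel(), device=self.log_s.device)
        ortho = ((self.U.t() @ self.U - eye) ** 2).sum() + ((self.V.t() @ self.V - eye) ** 2).sum()
        return lam * ortho - gamma * self.log_s.sum()


def train(generator, discriminator, loader, z_dim=128, lam=10.0, gamma=1.0, device="cpu"):
    # Reported optimizer settings: Adam with alpha = 0.0002, beta1 = 0.5, beta2 = 0.999,
    # and n_dis = 1, i.e. one discriminator update per generator update.
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    bce = nn.BCEWithLogitsLoss()  # placeholder GAN loss; the paper's loss may differ

    for real, _ in loader:  # assumes the loader yields (images, labels) batches
        real = real.to(device)
        b = real.size(0)
        ones = torch.ones(b, 1, device=device)    # assumes the discriminator
        zeros = torch.zeros(b, 1, device=device)  # outputs a (batch, 1) logit

        # Discriminator step (n_dis = 1): GAN loss plus the spectrum-control style penalty.
        fake = generator(torch.randn(b, z_dim, device=device)).detach()
        d_loss = bce(discriminator(real), ones) + bce(discriminator(fake), zeros)
        d_loss = d_loss + sum(
            m.spectrum_penalty(lam, gamma)
            for m in discriminator.modules()
            if isinstance(m, SpectrumLinear)
        )
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step.
        g_loss = bce(discriminator(generator(torch.randn(b, z_dim, device=device))), ones)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
```

The generator and discriminator architectures, and the exact form of the singular-value regularizer (the paper also reports a Divergence regularizer with γ = 0.054), should be taken from the paper itself; the penalty above only gestures at the idea of controlling the discriminator's spectrum.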