DISTRIBUTIONAL CONCAVITY REGULARIZATION FOR GANS

Authors: Shoichiro Yamaguchi, Masanori Koyama

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We will not only show that our DC regularization can achieve highly competitive results on ILSVRC2012 and CIFAR datasets in terms of Inception score and Fréchet inception distance, but also provide a mathematical guarantee that our method can always increase the entropy of the generator distribution." From Section 4 (Experimental Results): "We applied DC regularization to the training of GANs on CIFAR-10 (Torralba et al., 2008), CIFAR-100 (Torralba et al., 2008) and ILSVRC2012 dataset (ImageNet) (Russakovsky et al., 2015) in various settings and evaluated its performance in terms of Inception score (Salimans et al., 2016) and Fréchet inception distance (FID) (Heusel et al., 2017)."
Researcher Affiliation | Industry | Shoichiro Yamaguchi, Masanori Koyama (Preferred Networks), {guguchi, masomatics}@preferred.jp
Pseudocode | Yes | Algorithm 1: "GANs algorithm with DC regularization" (a hedged sketch of an alternating training loop in this style appears after this table)
Open Source Code | No | The paper does not contain an explicit statement about open-sourcing the code for the described methodology, nor a direct link to a code repository.
Open Datasets | Yes | "We applied DC regularization to the training of GANs on CIFAR-10 (Torralba et al., 2008), CIFAR-100 (Torralba et al., 2008) and ILSVRC2012 dataset (ImageNet) (Russakovsky et al., 2015) in various settings and evaluated its performance in terms of Inception score (Salimans et al., 2016) and Fréchet inception distance (FID) (Heusel et al., 2017)." (a sketch of the FID computation used for evaluation appears after this table)
Dataset Splits | No | The paper mentions standard datasets such as CIFAR-10 and ImageNet but does not explicitly state the training, validation, and test splits (e.g., percentages or sample counts) used for the experiments. It refers to evaluation on generated samples and real data, but not to the partitioning strategy for the datasets.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used to run the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify software dependencies (e.g., programming languages, libraries, or frameworks) with version numbers needed to reproduce the experiments.
Experiment Setup | Yes | "For the optimization, we used Adam (Kingma & Ba, 2015) and chose (α = 0.0002, β1 = 0, β2 = 0.9) for the hyperparameters. Also, we chose (ndis = 1, ngen = 1) for SNDCGAN and (ndis = 5, ngen = 1) for SNResNet. We updated the generator 100k times and linearly decayed the learning rate over last 5k iterations." "We also set λ = 3.0, d = 0.01 for the parameters in (13)." "We set λ = 6.0, d = 0.01 for the parameters in equation (13)."
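
The quoted experiment setup pins down the optimizer configuration and update schedule but not the code itself. The following is a minimal PyTorch sketch, written for this summary rather than taken from the authors, of an alternating GAN training loop in the style of Algorithm 1 using the quoted hyperparameters: Adam with (α = 0.0002, β1 = 0, β2 = 0.9), ndis = 5 and ngen = 1 (the SNResNet setting; SNDCGAN uses 1 and 1), 100k generator updates, and a linear learning-rate decay over the last 5k iterations. The hinge adversarial loss, data iterator, and model classes are assumptions; the DC regularization term of equation (13) (with λ and d) is deliberately omitted because its exact form and placement are defined in the paper, not here.

    import torch
    from torch.optim import Adam
    from torch.optim.lr_scheduler import LambdaLR

    N_GEN_UPDATES = 100_000      # "updated the generator 100k times"
    DECAY_START = 95_000         # linear decay over the last 5k updates
    N_DIS, N_GEN = 5, 1          # SNResNet setting; SNDCGAN uses (1, 1)

    def linear_decay(step):
        # Learning-rate multiplier: 1.0 until DECAY_START, then linearly to 0.
        if step < DECAY_START:
            return 1.0
        return max(0.0, (N_GEN_UPDATES - step) / (N_GEN_UPDATES - DECAY_START))

    def train(G, D, data_iter, z_dim, batch_size, device="cuda"):
        # Adam with (alpha = 0.0002, beta1 = 0, beta2 = 0.9) for both players, as quoted.
        opt_G = Adam(G.parameters(), lr=2e-4, betas=(0.0, 0.9))
        opt_D = Adam(D.parameters(), lr=2e-4, betas=(0.0, 0.9))
        sched_G = LambdaLR(opt_G, lr_lambda=linear_decay)
        sched_D = LambdaLR(opt_D, lr_lambda=linear_decay)

        for step in range(N_GEN_UPDATES):
            # ndis discriminator updates per outer iteration.
            for _ in range(N_DIS):
                x_real = next(data_iter).to(device)
                z = torch.randn(x_real.size(0), z_dim, device=device)
                x_fake = G(z).detach()
                # Hinge adversarial loss (an assumption, common with spectral
                # normalization). The DC regularizer of Eq. (13) is omitted here.
                d_loss = (torch.relu(1.0 - D(x_real)).mean()
                          + torch.relu(1.0 + D(x_fake)).mean())
                opt_D.zero_grad()
                d_loss.backward()
                opt_D.step()

            # ngen generator updates per outer iteration.
            for _ in range(N_GEN):
                z = torch.randn(batch_size, z_dim, device=device)
                g_loss = -D(G(z)).mean()
                opt_G.zero_grad()
                g_loss.backward()
                opt_G.step()

            sched_G.step()
            sched_D.step()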
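
The evaluation quoted above relies on the Fréchet inception distance (Heusel et al., 2017). As a reference for what that metric computes, here is a small self-contained sketch (an illustration for this summary, not the authors' evaluation code): FID is the Fréchet distance between Gaussians fitted to Inception-network activations of real and generated samples, FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). The feature matrices are assumed to be precomputed Inception pooling features.

    import numpy as np
    from scipy import linalg

    def frechet_inception_distance(feats_real, feats_gen):
        # feats_real, feats_gen: (N, D) arrays of Inception activations for
        # real and generated images (assumed precomputed).
        mu_r = feats_real.mean(axis=0)
        mu_g = feats_gen.mean(axis=0)
        sigma_r = np.cov(feats_real, rowvar=False)
        sigma_g = np.cov(feats_gen, rowvar=False)

        # Matrix square root of Sigma_r @ Sigma_g; numerical noise can add a
        # small imaginary component, which is discarded.
        covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
        if np.iscomplexobj(covmean):
            covmean = covmean.real

        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))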