Quantitatively Evaluating GANs With Divergences Proposed for Training

Authors: Daniel Jiwoong Im, He Ma, Graham W. Taylor, Kristin Branson

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To compare these and previous metrics for evaluating GANs, we performed many experiments, training and comparing multiple types of GANs with multiple architectures on multiple data sets. We qualitatively and quantitatively compared these metrics to human perception.
Researcher Affiliation | Collaboration | ¹Janelia Research Campus, HHMI; ²AIFounded Inc.; ³University of Guelph; ⁴Vector Institute
Pseudocode | Yes | Algorithm 1: Compute the divergence/distance.
1: procedure DIVERGENCECOMPUTATION(dataset {X_tr, X_te}, generator G_θ, learning rate η, evaluation criterion J(ϕ, X, Y))
2:   Initialize critic network parameters ϕ.
3:   for i = 1 … N do
4:     Sample data points {x_m} ∼ X_tr.
5:     Sample points from the generative model, {s_m} ∼ G_θ.
6:     ϕ ← ϕ + η ∇_ϕ J({x_m}, {s_m}; ϕ).
7:   Sample points from the generative model, {s_m} ∼ G_θ.
8:   return J(ϕ, X_te, {s_m}).
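As a concrete, heavily simplified illustration of the loop in Algorithm 1, the sketch below trains a critic by gradient ascent on a Wasserstein-style objective J and then evaluates J on held-out real data against fresh generator samples. The linear critic f(x) = ϕ·x, the weight clipping, and the toy 1-D Gaussian "generator" are illustrative assumptions, not the paper's architectures or objectives.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_divergence(x_train, x_test, sample_generator,
                      eta=0.1, n_steps=200, batch=256):
    """Sketch of Algorithm 1: maximize J(phi) = E[phi*x] - E[phi*s]
    over a clipped scalar critic parameter phi, then report J on X_te."""
    phi = 0.0                                        # critic parameter
    for _ in range(n_steps):
        xm = rng.choice(x_train, size=batch)         # {x_m} ~ X_tr
        sm = sample_generator(batch)                 # {s_m} ~ G_theta
        grad = xm.mean() - sm.mean()                 # dJ/dphi for a linear critic
        phi = np.clip(phi + eta * grad, -1.0, 1.0)   # ascent step + clipping
    sm = sample_generator(len(x_test))               # fresh generator samples
    return phi * (x_test.mean() - sm.mean())         # J(phi, X_te, {s_m})

# Toy example: real data from N(2, 1), a "generator" emitting N(0, 1).
x = rng.normal(2.0, 1.0, size=10_000)
div = critic_divergence(x[:8000], x[8000:],
                        lambda n: rng.normal(0.0, 1.0, size=n))
```

With the means separated by 2 and the critic clipped to [-1, 1], the returned divergence estimate lands near 2, and shrinks as the generator's distribution approaches the data distribution.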
Open Source Code | No | The paper mentions using 'pre-trained GANs downloaded from (pyt)' and provides a URL for 'pytorch-generative-model-collections'; however, it does not explicitly state that the authors of this paper released the code for their proposed evaluation metrics.
Open Datasets | Yes | In our experiments, we considered the MNIST (LeCun et al., 1998), CIFAR10, LSUN Bedroom, and Fashion MNIST datasets.
Dataset Splits | Yes | MNIST: from the 60,000 training examples, we set aside 10,000 as validation examples to tune various hyper-parameters. CIFAR10: we used 45,000, 5,000, and 10,000 examples as training, validation, and test data, respectively. LSUN Bedroom: from the 3,033,342 images, we used 90,000 images as training data and 90,000 images as validation data.
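A minimal sketch of how such held-out splits can be reproduced by shuffling indices; the `split` helper and the fixed seed are hypothetical, not taken from the paper.

```python
import numpy as np

def split(n_total, n_val, seed=0):
    """Hypothetical helper: shuffle example indices and hold out
    the first n_val of them as a validation set."""
    idx = np.random.default_rng(seed).permutation(n_total)
    return idx[n_val:], idx[:n_val]

# MNIST-style split: 60,000 training examples -> 50,000 train / 10,000 val.
train_idx, val_idx = split(60_000, 10_000)
```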
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as 'pytorch-generative-model-collections', the Inception network, and ResNet, but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | Hyperparameters: Table 10 in the Appendix shows the learning rates and the convolutional kernel sizes used for each experiment. The architecture of each network is presented in Figure 10 in the Appendix. Additionally, we used exponential-mean-square kernels with several different sigma values for MMD.
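Reading "exponential-mean-square kernels" as Gaussian RBF kernels k(x, y) = exp(-‖x − y‖² / (2σ²)), an MMD² estimate summed over several bandwidths might look like the sketch below; the specific σ values and the biased (V-statistic) estimator are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def mmd2_rbf(x, y, sigmas=(1.0, 2.0, 5.0, 10.0)):
    """Biased MMD^2 estimate between samples x and y, using Gaussian
    kernels exp(-||a - b||^2 / (2 sigma^2)) summed over several sigmas."""
    x, y = np.atleast_2d(x), np.atleast_2d(y)

    def sqdists(a, b):
        # Pairwise squared Euclidean distances via broadcasting.
        return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)

    dxx, dyy, dxy = sqdists(x, x), sqdists(y, y), sqdists(x, y)
    total = 0.0
    for s in sigmas:
        kxx = np.exp(-dxx / (2 * s**2)).mean()
        kyy = np.exp(-dyy / (2 * s**2)).mean()
        kxy = np.exp(-dxy / (2 * s**2)).mean()
        total += kxx + kyy - 2 * kxy  # squared mean-embedding distance
    return total

rng = np.random.default_rng(0)
same = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_rbf(rng.normal(size=(200, 2)), rng.normal(3.0, 1.0, size=(200, 2)))
```

Samples from the same distribution (`same`) yield a value near zero, while mismatched distributions (`diff`) yield a clearly larger value, which is why MMD can rank generators by sample quality.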