A Domain Agnostic Measure for Monitoring and Evaluating GANs
Authors: Paulina Grnarova, Kfir Y. Levy, Aurelien Lucchi, Nathanaël Perraudin, Ian Goodfellow, Thomas Hofmann, Andreas Krause
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show the effectiveness of this measure to rank different GAN models and capture the typical GAN failure scenarios, including mode collapse and non-convergent behaviours. |
| Researcher Affiliation | Academia | Paulina Grnarova (ETH Zurich); Kfir Y. Levy (Technion - Israel Institute of Technology); Aurelien Lucchi (ETH Zurich); Nathanaël Perraudin (Swiss Data Science Center); Ian Goodfellow; Thomas Hofmann (ETH Zurich); Andreas Krause (ETH Zurich) |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the code for the described methodology is open-sourced. |
| Open Datasets | Yes | We train a vanilla GAN on three toy datasets with increasing difficulty: a) RING: a mixture of 8 Gaussians, b) SPIRAL: a mixture of 20 Gaussians and c) GRID: a mixture of 25 Gaussians. [...] We train a GAN on MNIST [...]. ProgGAN trained on CelebA. [...] ResNet-based GAN variants on CIFAR-10. (A sketch of the toy mixtures appears after the table.) |
| Dataset Splits | Yes | Thus we split our dataset into three disjoint subsets: a training set, an adversary-finding set, and a test set, which are respectively used in phases (i), (ii), and (iii). (A minimal three-way split is sketched after the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions 'TensorFlow' in a reference but does not specify version numbers for any of the software components used in its experiments, as required for reproducibility. |
| Experiment Setup | Yes | In practice, the metrics are computed by optimizing a separate generator/discriminator using a gradient-based algorithm. To speed up the optimization, we initialize the networks using the parameters of the adversary at the step being evaluated. Hence, if we are evaluating the GAN at step `t`, we train `v_worst` for `u_t` and `u_worst` for `v_t` by using `v_t` as a starting point for `v_worst` and, analogously, `u_t` as a starting point for `u_worst`, for a fixed number of steps. (The duality-gap sketch after the table follows this recipe.) |
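The three toy datasets quoted in the "Open Datasets" row are standard 2-D Gaussian-mixture benchmarks. Below is a minimal NumPy sketch of how such data can be generated. The mode counts (8, 20, 25) come from the paper; the mode placement, radius, spacing, per-mode standard deviations, and the spiral parameterization are illustrative assumptions.

```python
import numpy as np

def ring(n, modes=8, radius=1.0, std=0.05, rng=None):
    """RING: mixture of `modes` Gaussians evenly spaced on a circle.
    radius/std are illustrative; the paper does not specify them."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = rng.integers(modes, size=n)
    theta = 2.0 * np.pi * k / modes
    centers = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

def spiral(n, modes=20, std=0.05, rng=None):
    """SPIRAL: mixture of `modes` Gaussians along a spiral curve
    (hypothetical parameterization)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = rng.integers(modes, size=n)
    t = 3.0 * np.pi * k / modes          # angle grows along the arm
    r = 0.25 + 0.25 * t                  # radius grows with the angle
    centers = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
    return centers + std * rng.standard_normal((n, 2))

def grid(n, side=5, spacing=1.0, std=0.05, rng=None):
    """GRID: mixture of side*side (= 25) Gaussians on a square lattice."""
    rng = rng if rng is not None else np.random.default_rng(0)
    k = rng.integers(side * side, size=n)
    centers = spacing * np.stack([k % side, k // side], axis=1).astype(float)
    return centers + std * rng.standard_normal((n, 2))
```

Each function returns an `(n, 2)` array of samples, one row per 2-D point, so the three datasets can be swapped into the same training loop.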
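The "Dataset Splits" row quotes the paper's three-way split: one subset to train the GAN, one to find the worst-case adversaries, and one to evaluate the metric. A minimal sketch follows, assuming a 60/20/20 split; the paper states only that the subsets are disjoint, not their relative sizes.

```python
import numpy as np

def three_way_split(data, frac_train=0.6, frac_adv=0.2, seed=0):
    """Split `data` into disjoint train / adversary-finding / test subsets.
    The 60/20/20 fractions are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(data))
    n_train = int(frac_train * len(data))
    n_adv = int(frac_adv * len(data))
    train = data[perm[:n_train]]                # phase (i): GAN training
    adv = data[perm[n_train:n_train + n_adv]]   # phase (ii): adversary finding
    test = data[perm[n_train + n_adv:]]         # phase (iii): metric evaluation
    return train, adv, test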
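The "Experiment Setup" row describes how the paper's duality gap, DG(u_t, v_t) = M(u_t, v_worst) - M(u_worst, v_t), is estimated in practice: each worst-case adversary is found by a fixed number of gradient steps, warm-started from the current networks. The sketch below follows that recipe for a vanilla GAN objective. The use of PyTorch, the Adam optimizer, and the `steps`/`lr` defaults are assumptions (the paper does not state them here), and `sample_real`/`sample_z` are hypothetical callables drawing batches from the adversary-finding set and the latent prior.

```python
import copy
import torch

def minimax_value(G, D, real, z, eps=1e-8):
    """Vanilla GAN objective M(u, v) = E[log D(x)] + E[log(1 - D(G(z)))].
    Assumes D outputs probabilities (sigmoid head)."""
    return (torch.log(D(real).clamp_min(eps)).mean()
            + torch.log((1.0 - D(G(z))).clamp_min(eps)).mean())

def duality_gap(G_t, D_t, sample_real, sample_z, steps=50, lr=1e-3):
    """Estimate DG at step t: max over discriminators minus min over
    generators, each warm-started from the current adversary."""
    # Phase 1: v_worst = argmax_v M(u_t, v), starting from a copy of v_t.
    D_worst = copy.deepcopy(D_t)
    opt_d = torch.optim.Adam(D_worst.parameters(), lr=lr)
    for _ in range(steps):
        loss = -minimax_value(G_t, D_worst, sample_real(), sample_z())
        opt_d.zero_grad()
        loss.backward()
        opt_d.step()          # only D_worst is updated; G_t stays frozen

    # Phase 2: u_worst = argmin_u M(u, v_t), starting from a copy of u_t.
    G_worst = copy.deepcopy(G_t)
    opt_g = torch.optim.Adam(G_worst.parameters(), lr=lr)
    for _ in range(steps):
        loss = minimax_value(G_worst, D_t, sample_real(), sample_z())
        opt_g.zero_grad()
        loss.backward()
        opt_g.step()          # only G_worst is updated; D_t stays frozen

    # DG(u_t, v_t) = M(u_t, v_worst) - M(u_worst, v_t) >= 0.
    with torch.no_grad():
        return (minimax_value(G_t, D_worst, sample_real(), sample_z())
                - minimax_value(G_worst, D_t, sample_real(), sample_z())).item()
```

Warm-starting from `v_t` and `u_t` is what keeps the evaluation cheap: the copies only need to be refined for a fixed number of steps rather than trained from scratch at every evaluation point.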