Approximability of Discriminators Implies Diversity in GANs

Authors: Yu Bai, Tengyu Ma, Andrej Risteski

Venue: ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our preliminary experiments show that on synthetic datasets the test IPM is well correlated with KL divergence or the Wasserstein distance, indicating that the lack of diversity in GANs may be caused by the sub-optimality in optimization instead of statistical inefficiency."
Researcher Affiliation | Academia | Yu Bai (Stanford University, yub@stanford.edu); Tengyu Ma (Stanford University, tengyuma@stanford.edu); Andrej Risteski (MIT, risteski@mit.edu)
Pseudocode | Yes | "Algorithm 1: Discriminator family with restricted approximability for degenerate manifold"
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "We design synthetic datasets, set up suitable generators, and train GANs with either our theoretically proposed discriminator class with restricted approximability, or vanilla neural network discriminators of reasonable capacity. [...] We set the ground truth distribution to be a unit circle or a Swiss roll curve, sampled from Circle: (x, y) ~ Uniform({(x, y) : x^2 + y^2 = 1}); Swiss roll: (x, y) = (z cos(4πz), z sin(4πz)), z ~ Uniform([0.25, 1])." (See the sampling sketch after the table.)
Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages, sample counts, or detailed methodology) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the RMSProp optimizer (Tieleman & Hinton, 2012) and the POT package, but does not give version numbers for these or other software dependencies, as reproducibility would require. (See the POT usage sketch after the table.)
Experiment Setup | Yes | "The generator architecture is 2-50-50-2, and the discriminator architecture is 2-50-50-1. We use the RMSProp optimizer (Tieleman & Hinton, 2012) as our update rule, the learning rates are 10^-4 for both the generator and discriminator, and we perform 10 steps on the discriminator in between each generator step." (See the training-loop sketch after the table.)
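
As referenced in the Open Datasets row, the two quoted ground-truth distributions can be sampled directly from their definitions. A minimal sketch, assuming NumPy; the function names are ours, not the paper's code:

```python
import numpy as np

def sample_circle(n, rng=None):
    """Sample n points uniformly from the unit circle {(x, y) : x^2 + y^2 = 1}."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.stack([np.cos(theta), np.sin(theta)], axis=1)

def sample_swiss_roll(n, rng=None):
    """Sample n points from the curve (z cos(4πz), z sin(4πz)), z ~ Uniform([0.25, 1])."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.uniform(0.25, 1.0, size=n)
    return np.stack([z * np.cos(4 * np.pi * z), z * np.sin(4 * np.pi * z)], axis=1)
```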
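As referenced in the Software Dependencies row, the paper uses the POT package without stating a version. A minimal sketch of how an empirical Wasserstein distance between real and generated point clouds could be computed with POT; the choice of the 1-Wasserstein distance with Euclidean cost is our assumption:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def empirical_w1(x, y):
    """Exact 1-Wasserstein distance between point clouds x, y of shape (n, d)."""
    a = np.full(len(x), 1.0 / len(x))  # uniform weights on the samples
    b = np.full(len(y), 1.0 / len(y))
    M = ot.dist(x, y, metric="euclidean")  # pairwise Euclidean cost matrix
    return ot.emd2(a, b, M)  # optimal-transport cost (earth mover's distance)
```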
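As referenced in the Experiment Setup row, the quoted settings (2-50-50-2 generator, 2-50-50-1 discriminator, RMSProp at learning rate 10^-4, 10 discriminator steps per generator step) translate into a short training loop. A minimal PyTorch sketch; the ReLU activations, latent distribution, and WGAN-style IPM losses are our assumptions, not the paper's stated choices:

```python
import torch
import torch.nn as nn

# Generator 2-50-50-2 and discriminator 2-50-50-1, as quoted above.
G = nn.Sequential(nn.Linear(2, 50), nn.ReLU(),
                  nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, 2))
D = nn.Sequential(nn.Linear(2, 50), nn.ReLU(),
                  nn.Linear(50, 50), nn.ReLU(), nn.Linear(50, 1))

opt_g = torch.optim.RMSprop(G.parameters(), lr=1e-4)
opt_d = torch.optim.RMSprop(D.parameters(), lr=1e-4)

def train_step(real_batch):
    """One generator update preceded by 10 discriminator updates."""
    n = real_batch.shape[0]
    for _ in range(10):
        fake = G(torch.randn(n, 2)).detach()
        loss_d = D(fake).mean() - D(real_batch).mean()  # IPM-style critic loss (assumed)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    loss_g = -D(G(torch.randn(n, 2))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Here `real_batch` would be a batch drawn from `sample_circle` or `sample_swiss_roll` above, converted to a float tensor.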