On the Discrimination-Generalization Tradeoff in GANs
Authors: Pengchuan Zhang, Qiang Liu, Dengyong Zhou, Tao Xu, Xiaodong He
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: 'In this section, we will test our analysis of the consistency of GAN objective and likelihood objective on two toy datasets, e.g., a 2D Gaussian dataset and a 2D 8-Gaussian mixture dataset.' |
| Researcher Affiliation | Collaboration | Pengchuan Zhang Microsoft Research, Redmond penzhan@microsoft.com; Qiang Liu Computer Science, Dartmouth College qiang.liu@dartmouth.edu; Dengyong Zhou Google dennyzhou@google.com; Tao Xu Computer Science, Lehigh University tax313@lehigh.edu; Xiaodong He Microsoft Research, Redmond xiaohe@microsoft.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper mentions 'a 2D Gaussian dataset' and 'a 2D 8-Gaussian mixture dataset' as toy datasets. However, it does not provide concrete access information (link, DOI, specific citation with authors/year, or mention of a well-established benchmark) for these datasets, which are essential for reproducibility. |
| Dataset Splits | No | The paper states: 'We take 10^5 samples for training, and 1000 samples for testing.' This provides details on training and testing sample sizes but does not mention a separate validation split or how these splits are defined or maintained for reproducibility beyond just the counts. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions). |
| Experiment Setup | Yes | The paper describes the setup: 'We train the generative model by WGAN with weight clipping. In the first experiment, the discriminator set is a neural network with one hidden layer and 500 hidden neurons... we take the discriminators to be the logdensity ratio between two Gaussian distributions, which are quadratic polynomials... Our generator assume that there are 8 Gaussian components and they have equal weights, and thus our generator does not have any modeling error. The training parameters are eight sets of scaling and biasing parameters in Eqn. (33), each for one Gaussian component. ...We use an MLP with 4 hidden layers and relu activations as the discriminator set.' |
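The generator in the Experiment Setup row (an equal-weight mixture of eight 2D Gaussians, parameterized by one scaling/biasing pair per component, cf. the paper's Eqn. (33)) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the parameter values, function names, and the `clip_weights` helper (which mimics the weight clipping WGAN applies to the discriminator) are all assumptions.

```python
import random

# Hypothetical per-component (scale, bias) pairs -- the paper trains eight
# such pairs, one per Gaussian component; these values are placeholders.
PARAMS = [((1.0, 1.0), (2.0 * k, 0.5 * k)) for k in range(8)]

def generate(n, params=PARAMS, seed=0):
    """Sample n points from an equal-weight 8-Gaussian-mixture generator:
    pick a component uniformly at random, then scale and shift 2D
    standard-normal noise with that component's parameters."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        (sx, sy), (bx, by) = rng.choice(params)          # equal mixture weights
        z = (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))   # 2D standard normal
        samples.append((sx * z[0] + bx, sy * z[1] + by))
    return samples

def clip_weights(weights, c=0.01):
    """WGAN-style weight clipping: project each discriminator weight
    into the interval [-c, c] after every update step."""
    return [max(-c, min(c, w)) for w in weights]
```

Because the generator shares the data's mixture form, it has no modeling error, matching the paper's claim; training then reduces to fitting the eight scale/bias pairs under the clipped-weight critic.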