Learning GANs and Ensembles Using Discrepancy

Authors: Ben Adlam, Corinna Cortes, Mehryar Mohri, Ningshan Zhang

NeurIPS 2019

Reproducibility assessment (variable, result, and LLM response):
Research Type: Experimental. "Our experiments on toy examples and several benchmark datasets show that DGAN is competitive with other GANs and that EDGAN outperforms existing GAN ensembles, such as AdaGAN."
Researcher Affiliation: Collaboration. Ben Adlam (Google Research, New York, NY 10011, adlam@google.com); Corinna Cortes (Google Research, New York, NY 10011, corinna@google.com); Mehryar Mohri (Google Research & CIMS, New York, NY 10012, mohri@google.com); Ningshan Zhang (New York University, New York, NY 10012, nzhang@stern.nyu.edu).
Pseudocode: Yes. Algorithm 1 (UPDATE DGAN) and Algorithm 2 (UPDATE EDGAN) are present in the paper.
Open Source Code: No. The paper does not contain any statement about making its own source code publicly available, nor does it provide a link to a code repository for the described methodology.
Open Datasets: Yes. "We show that DGAN obtains competitive results on the benchmark datasets MNIST, CIFAR10, CIFAR100, and CelebA (at resolution 128×128)."
Dataset Splits: Yes. "We then took 50k samples from each generator and the training split of CIFAR10, and embedded these images using a pre-trained classifier."
Hardware Specification: No. The paper does not provide specific hardware details such as GPU models, CPU types, or other computing resources used for running the experiments.
Software Dependencies: No. The paper mentions using pre-trained models from TF-Hub but does not specify any software dependencies with version numbers (e.g., TensorFlow version, Python version, specific library versions) needed to replicate the experiments.
Experiment Setup: No. While the paper mentions using 'gradient penalization' and 'weight clipping', refers to a 'standard DCGAN architecture', and describes the embedding layer size as a 'hyperparameter that can be tuned', it does not provide specific numerical values for hyperparameters (e.g., learning rate, batch size, gradient penalty coefficient, clipping range) or detailed training configurations in the main text. Architectural details are deferred to an appendix not provided.
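For readers unfamiliar with the two Lipschitz-control techniques the paper names but does not parameterize, the following numpy sketch illustrates them in their standard WGAN / WGAN-GP forms. It is an assumption-laden illustration, not the paper's implementation: the clipping range `c` and penalty coefficient `lam` are illustrative defaults, precisely the values the paper omits.

```python
import numpy as np

# Two standard ways to keep a GAN critic (near-)Lipschitz, both mentioned
# in the paper's text. Hyperparameter values below are illustrative only;
# the paper does not report the values it used.

def clip_weights(weights, c=0.01):
    """Weight clipping (WGAN-style): force every parameter into [-c, c]."""
    return [np.clip(w, -c, c) for w in weights]

def gradient_penalty(grad_at_interpolates, lam=10.0):
    """Gradient penalty (WGAN-GP-style): push the critic's gradient norm
    toward 1 at points interpolated between real and generated samples.
    `grad_at_interpolates` has shape (batch, dim); `lam` is the penalty
    coefficient (illustrative default)."""
    norms = np.linalg.norm(grad_at_interpolates, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

# For a linear critic f(x) = w . x the gradient is w everywhere,
# so the penalty depends only on ||w||.
w = np.array([[3.0, 4.0]])           # ||w|| = 5
print(gradient_penalty(w))           # 10 * (5 - 1)^2 = 160.0
print(clip_weights([w])[0].max())    # 0.01 after clipping
```

In a real training loop the gradients at interpolated points come from autodiff rather than a closed form; the linear critic here just makes the penalty value easy to check by hand.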