Teaching a GAN What Not to Learn
Authors: Siddarth Asokan, Chandra Sekhar Seelamantula
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The advantage of the reformulation is demonstrated by means of experiments conducted on MNIST, Fashion-MNIST, CelebA, and CIFAR-10 datasets. |
| Researcher Affiliation | Academia | Siddarth Asokan, Robert Bosch Center for Cyber-Physical Systems, Indian Institute of Science, Bangalore, India (siddartha@iisc.ac.in); Chandra Sekhar Seelamantula, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India (css@iisc.ac.in) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We conduct experiments on MNIST [28], Fashion-MNIST [29], CelebA [30] and CIFAR-10 [31] datasets. |
| Dataset Splits | No | The paper describes how positive/negative classes and minority classes are constructed from the datasets, but it does not specify traditional training/validation/test dataset splits with exact percentages, sample counts, or citations to predefined splits for general model training. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | Yes | The GAN models are coded in TensorFlow 2.0 [32]. |
| Experiment Setup | Yes | In all the cases, latent noise is drawn from a 100-dimensional standard Gaussian N(0_100, I_100). The ADAM optimizer [34] with learning rate η = 10^-4 and exponential decay parameters for the first and second moments β1 = 0.50 and β2 = 0.999 is used for training both the generator and the discriminator. A batch size of 100 is used for all the experiments and all models were trained for 100 epochs, unless stated otherwise. |
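For concreteness, the quoted setup translates into TensorFlow 2 roughly as follows. This is a minimal sketch, not the authors' code (none is released, per the Open Source Code row): the generator and discriminator architectures are hypothetical stand-ins sized for MNIST, and the standard GAN loss is used in place of the paper's reformulated objective, which additionally splits the real data into positive and negative classes. Only the latent prior, the ADAM hyperparameters, the batch size, and the epoch count come from the paper.

```python
import tensorflow as tf

LATENT_DIM = 100   # 100-dimensional standard Gaussian latent prior (from the paper)
BATCH_SIZE = 100   # batch size reported in the paper
EPOCHS = 100       # training length reported in the paper

# Hypothetical architectures: the paper's network details are not given in the
# quoted setup, so these stand-ins only show where the reported settings plug in.
def build_generator():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(7 * 7 * 128, activation="relu", input_shape=(LATENT_DIM,)),
        tf.keras.layers.Reshape((7, 7, 128)),
        tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 4, strides=2, padding="same",
                               activation=tf.nn.leaky_relu, input_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(128, 4, strides=2, padding="same",
                               activation=tf.nn.leaky_relu),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),
    ])

generator = build_generator()
discriminator = build_discriminator()

# ADAM with η = 1e-4, β1 = 0.50, β2 = 0.999, as reported, for both networks.
gen_opt = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5, beta_2=0.999)
disc_opt = tf.keras.optimizers.Adam(learning_rate=1e-4, beta_1=0.5, beta_2=0.999)

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(real_images):
    # Latent noise drawn from N(0_100, I_100), as reported.
    z = tf.random.normal([BATCH_SIZE, LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(z, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Standard GAN losses only; the paper's positive/negative-class
        # reformulation is NOT reproduced here.
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    gen_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                                generator.trainable_variables))
    disc_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                                 discriminator.trainable_variables))
    return d_loss, g_loss

# Example data pipeline: MNIST, one of the paper's datasets, rescaled to [-1, 1].
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = (x_train.astype("float32") - 127.5) / 127.5
dataset = (tf.data.Dataset.from_tensor_slices(x_train[..., None])
           .shuffle(60_000)
           .batch(BATCH_SIZE, drop_remainder=True))

for epoch in range(EPOCHS):
    for batch in dataset:
        d_loss, g_loss = train_step(batch)
```

Swapping in the paper's objective would require a second real-data stream for the negative class and the corresponding terms in d_loss; since the report found neither pseudocode nor released code, that part cannot be reconstructed faithfully here.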