AmbientGAN: Generative models from lossy measurements
Authors: Ashish Bora, Eric Price, Alexandros G. Dimakis
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines. (Illustrative sketches of the training loop and of the inception score follow the table.) |
| Researcher Affiliation | Academia | Ashish Bora, Department of Computer Science, University of Texas at Austin (ashish.bora@utexas.edu); Eric Price, Department of Computer Science, University of Texas at Austin (ecprice@cs.utexas.edu); Alexandros G. Dimakis, Department of Electrical and Computer Engineering, University of Texas at Austin (dimakis@austin.utexas.edu) |
| Pseudocode | No | The paper describes algorithms in prose and mathematical formulations but does not include formal pseudocode blocks or algorithms. |
| Open Source Code | Yes | code reused from https://github.com/carpedm20/DCGAN-tensorflow; code reused from https://github.com/igul222/improved_wgan_training |
| Open Datasets | Yes | MNIST is a dataset of 28×28 images of handwritten digits [LeCun et al. (1998)]. CelebA is a dataset of face images of celebrities [Liu et al. (2015)]. We use an aligned and cropped version where each image is 64×64 RGB. The CIFAR-10 dataset consists of 32×32 RGB images from 10 different classes [Krizhevsky & Hinton (2009)]. |
| Dataset Splits | No | The paper trains and evaluates on standard benchmark datasets but does not explicitly specify the training/validation/test splits, percentages, or sample counts needed for reproducibility. |
| Hardware Specification | No | The paper does not describe the hardware used to run the experiments; it only implies a general setup through its discussion of neural-network training. |
| Software Dependencies | No | The paper implies the use of TensorFlow (via tensorflow.org and GitHub repositories such as carpedm20/DCGAN-tensorflow) and names specific models/codebases (DCGAN, WGANGP, ACWGANGP), but it does not provide version numbers for TensorFlow or any other software dependency. |
| Experiment Setup | Yes | For the MNIST dataset, we use two GAN models. The first model is a conditional DCGAN which follows the architecture in [Radford et al. (2015)], while the second model is an unconditional Wasserstein GAN with gradient penalty (WGANGP) which follows the architecture in [Gulrajani et al. (2017)]. For the CelebA dataset, we use an unconditional DCGAN and follow the architecture in [Radford et al. (2015)]. For the CIFAR-10 dataset, we use an Auxiliary Classifier Wasserstein GAN with gradient penalty (ACWGANGP) which follows the residual architecture in [Gulrajani et al. (2017)]. More details on architectures and hyperparameters can be found in the appendix. (A sketch of the gradient-penalty term appears after the table.) |
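
The paper's core idea, summarized in the Research Type row, is to train a GAN when only lossy measurements of the data are available: the generator's output is passed through the known, differentiable measurement process before the discriminator sees it, so the discriminator compares measured real data against measured generated data. Below is a minimal sketch of that training loop, assuming PyTorch (the authors' released code builds on the TensorFlow repositories listed above) and using the paper's Block-Pixels measurement; `G`, `D`, the optimizers, and `p_drop` are illustrative placeholders, not the paper's architecture.

```python
import torch

def measure(x, p_drop=0.5):
    # Block-Pixels measurement from the paper: zero each pixel
    # independently with probability p_drop (the value here is illustrative).
    mask = (torch.rand_like(x) > p_drop).float()
    return x * mask

def ambientgan_step(G, D, opt_G, opt_D, real_images, z_dim=100):
    # G maps noise to images; D is a discriminator over *measurements*
    # returning one logit per example. All placeholders, not the authors' models.
    device = real_images.device
    n = real_images.size(0)
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    ones = torch.ones(n, 1, device=device)
    zeros = torch.zeros(n, 1, device=device)

    # The discriminator only ever sees measurements: the dataset's measured
    # images (simulated here by measuring clean benchmark images, as in the
    # paper's experiments) versus measurements of generated images.
    y_real = measure(real_images)
    y_fake = measure(G(torch.randn(n, z_dim, device=device))).detach()
    d_loss = bce(D(y_real), ones) + bce(D(y_fake), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # The generator trains end to end through the differentiable measurement.
    g_loss = bce(D(measure(G(torch.randn(n, z_dim, device=device)))), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```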
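The 2-4x improvements quoted in the Research Type row are measured by inception score, IS = exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) is the class posterior a pretrained classifier assigns to a generated sample. A small NumPy sketch of that formula follows; the classifier itself and the paper's exact evaluation protocol are assumed, not shown.

```python
import numpy as np

def inception_score(pyx):
    # pyx: (N, C) array of class posteriors p(y|x) from a pretrained
    # classifier, one row per generated sample (assumed input format).
    py = pyx.mean(axis=0)  # marginal label distribution p(y)
    kl = (pyx * (np.log(pyx + 1e-12) - np.log(py + 1e-12))).sum(axis=1)
    return float(np.exp(kl.mean()))  # exp(E_x[KL(p(y|x) || p(y))])
```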
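Finally, the Experiment Setup row states that the WGANGP and ACWGANGP models follow Gulrajani et al. (2017). For reference, here is a hedged PyTorch sketch of the gradient-penalty term that defines that objective; the 4-D image-batch shape and lam=10.0 follow that paper's defaults rather than anything stated in this report.

```python
import torch

def gradient_penalty(D, y_real, y_fake, lam=10.0):
    # WGAN-GP (Gulrajani et al., 2017): penalize the deviation of the
    # critic's gradient norm from 1 along random real/fake interpolates.
    eps = torch.rand(y_real.size(0), 1, 1, 1, device=y_real.device)
    y_hat = (eps * y_real + (1 - eps) * y_fake).requires_grad_(True)
    grads = torch.autograd.grad(outputs=D(y_hat).sum(), inputs=y_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lam * ((grad_norm - 1) ** 2).mean()
```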