ProbGAN: Towards Probabilistic GAN with Theoretical Guarantees
Authors: Hao He, Hao Wang, Guang-He Lee, Yonglong Tian
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evidence on synthetic high-dimensional multi-modal data and image databases (CIFAR-10, STL-10, and ImageNet) demonstrates the superiority of our method over both state-of-the-art multi-generator GANs and other probabilistic treatments of GANs. In this section, we evaluate our model with two inference algorithms proposed in Section 3.3 (denoted as ProbGAN-GMA and ProbGAN-PSA). |
| Researcher Affiliation | Academia | Hao He, Hao Wang, Guang-He Lee, Yonglong Tian, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, {haohe,hwang87,guanghe,yonglong}@mit.edu |
| Pseudocode | Yes | Algorithm 1: Our Adapted SGHMC Inference Algorithm (a generic SGHMC update sketch follows the table) |
| Open Source Code | No | We will release our evaluation code soon. |
| Open Datasets | Yes | We evaluate our method on 3 widely-adopted datasets: CIFAR-10 (Krizhevsky et al., 2010), STL-10 (Coates et al., 2011) and ImageNet (Deng et al., 2009). |
| Dataset Splits | No | The paper mentions 'CIFAR-10 has 50k training and 10k test' but does not specify a validation split (e.g., percentages, sample counts, or the methodology for creating one); a hedged example split appears after the table. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud computing instance specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'TensorFlow' and 'PyTorch (Paszke et al., 2017)' but does not provide specific version numbers for these or any other software dependencies crucial for replication. |
| Experiment Setup | Yes | For a fair comparison with baselines, we use the same settings as MGAN. We resize the STL-10 and ImageNet images down to 48×48 and 32×32 respectively. ... All models are optimized by Adam (Kingma & Ba, 2014) with a learning rate of 2×10⁻⁴. For probabilistic methods, the SGHMC noise factor is set to 3×10⁻². Following the configuration in MGAN, the batch sizes of generators and discriminators are 120 and 64. (A hedged setup sketch follows the table.) |
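
For context on the pseudocode row, below is a minimal sketch of a generic SGHMC parameter update in PyTorch. This is not the paper's Algorithm 1: the `friction` value and the exact noise scaling are illustrative assumptions; only the learning rate (2×10⁻⁴) and the SGHMC noise factor (3×10⁻²) come from the reported setup.

```python
import torch

def sghmc_step(params, grads, momenta,
               lr=2e-4, friction=0.05, noise_factor=3e-2):
    """One stochastic-gradient HMC update over lists of tensors.

    Hedged sketch: `friction` is an assumed hyperparameter; the paper
    reports only the learning rate (2e-4) and SGHMC noise factor (3e-2).
    """
    for p, g, v in zip(params, grads, momenta):
        # Langevin-style noise injected into the momentum update.
        noise = noise_factor * torch.randn_like(p) * (2.0 * friction * lr) ** 0.5
        v.mul_(1.0 - friction).add_(-lr * g).add_(noise)  # momentum update
        p.add_(v)                                         # position update
```

In a ProbGAN-style loop, `grads` would be the stochastic gradients of the generator posterior (derived from the discriminators), with one momentum buffer kept per parameter tensor.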
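Since the paper states only the standard CIFAR-10 50k/10k train/test division, the following is a hedged sketch of how a reproducer might carve out a validation split; the 45k/5k ratio and the fixed seed are assumptions, not anything the paper specifies.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Standard CIFAR-10: 50k training images, 10k test images.
train_full = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())

# Assumed 45k/5k split; the paper does not specify a validation set.
train_set, val_set = random_split(
    train_full, [45_000, 5_000],
    generator=torch.Generator().manual_seed(0))  # fixed seed for reproducibility
```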
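Finally, a hedged reconstruction of the reported optimizer and data settings. The learning rate, batch sizes, and image resolutions are the reported values; the toy generator/discriminator modules and the Adam betas are placeholders (assumptions), since the paper adopts MGAN's architectures rather than anything shown here.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

lr = 2e-4                         # reported Adam learning rate
gen_batch, disc_batch = 120, 64   # reported batch sizes (following MGAN)

# STL-10 resized to 48x48 as reported; ImageNet would be resized to 32x32.
stl10 = datasets.STL10(
    root="./data", split="train", download=True,
    transform=transforms.Compose([
        transforms.Resize((48, 48)),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]))

# Toy stand-ins; the paper uses MGAN's architectures, not these.
generator = nn.Sequential(nn.Linear(100, 3 * 48 * 48), nn.Tanh())
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(3 * 48 * 48, 1))

# betas=(0.5, 0.999) is a common GAN default, assumed here (not reported).
opt_g = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
opt_d = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))
```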