Perceptual Generative Autoencoders
Authors: Zijun Zhang, Ruixiang Zhang, Zongpeng Li, Yoshua Bengio, Liam Paull
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of LPGA and VPGA on three image datasets, MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). For each model and each dataset, we take 5,000 generated samples to compute the FID score. The results (with standard errors over 3 or more runs) are summarized in Table 1. (A sketch of this FID computation follows the table.) |
| Researcher Affiliation | Academia | ¹University of Calgary, Canada ²MILA, Université de Montréal, Canada ³Wuhan University, China. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/zj10/PGA. |
| Open Datasets | Yes | In this section, we evaluate the performance of LPGA and VPGA on three image datasets, MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). |
| Dataset Splits | No | The paper mentions tuning hyperparameters heuristically but does not provide specific train/validation/test dataset split percentages, counts, or explicit references to standard splits used for validation. |
| Hardware Specification | No | All experiments are performed on a single GPU. This statement is too general and does not provide specific model numbers or types of GPU, CPU, or other hardware components. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch versions). |
| Experiment Setup | Yes | SGD with a momentum of 0.9 is used to train all models. For LPGA, γ (Eq. (9)) tends to vary in a small range for different datasets (e.g., 1.5e-2 for MNIST and CIFAR-10, and 1e-2 for CelebA). For VPGA, η (Eq. (13)) can vary widely (e.g., 2e-2 for MNIST, 3e-2 for CIFAR-10, and 2e-3 for CelebA), and thus is slightly more difficult to tune. (An optimizer sketch follows the table.) |
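
For reference, the FID evaluation quoted above (5,000 generated samples per model and dataset) can be reproduced from Inception activation statistics. A minimal sketch, assuming activations have already been extracted with an Inception-v3 feature extractor; the paper does not specify its FID tooling, so this is an illustration rather than the authors' script:

```python
# Minimal FID computation from precomputed Inception activations.
import numpy as np
from scipy import linalg

def fid(act_real: np.ndarray, act_fake: np.ndarray) -> float:
    """Frechet Inception Distance between two activation sets.

    act_real, act_fake: (N, D) arrays of Inception-v3 pooling features,
    e.g. N = 5000 generated samples as in the paper's evaluation.
    """
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    # Matrix square root of the covariance product; drop the tiny
    # imaginary parts that numerical error can introduce.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Repeating this over 3 or more independently trained runs and reporting the mean with its standard error matches the protocol described in the quoted excerpt.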
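The experiment setup likewise maps onto a standard optimizer configuration. A minimal PyTorch sketch, where the network and learning rate are placeholders (only the momentum of 0.9 and the per-dataset γ/η values are stated in the excerpt; the authoritative implementation is at https://github.com/zj10/PGA):

```python
import torch

# SGD with momentum 0.9, as stated in the paper. The learning rate and
# the stand-in network below are placeholders, not values from the paper.
model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 784))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Per-dataset loss weights reported in the paper.
GAMMA = {"mnist": 1.5e-2, "cifar10": 1.5e-2, "celeba": 1e-2}  # LPGA, Eq. (9)
ETA = {"mnist": 2e-2, "cifar10": 3e-2, "celeba": 2e-3}        # VPGA, Eq. (13)
```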