Diagnosing and Enhancing VAE Models
Authors: Bin Dai, David Wipf
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments from Sections 5 and 6 empirically corroborate motivational theory and reveal that the proposed two-stage procedure can generate high-quality samples... |
| Researcher Affiliation | Collaboration | Bin Dai (Institute for Advanced Study, Tsinghua University, Beijing, China; daib13@mails.tsinghua.edu.cn); David Wipf (Microsoft Research, Beijing, China; davidwipf@gmail.com) |
| Pseudocode | No | No pseudocode or algorithm blocks were found. The two-stage method is described in narrative text. |
| Open Source Code | Yes | The code for our model is available at https://github.com/daib13/TwoStageVAE. |
| Open Datasets | Yes | Testing is conducted across four significantly different datasets: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky & Hinton, 2009) and CelebA (Liu et al., 2015). |
| Dataset Splits | No | No explicit train/validation/test dataset splits (e.g., percentages or sample counts) are provided. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments are provided. |
| Software Dependencies | No | All reported FID scores for VAE and GAN models were computed using TensorFlow (https://github.com/bioinf-jku/TTUR). No version number is specified for TensorFlow, and no other software dependencies with version numbers are listed. |
| Experiment Setup | No | The paper states 'No effort was made to tune VAE training hyperparameters (e.g., learning rates, etc.); rather a single generic setting was first agnostically selected and then applied to all VAE-like models', but does not provide specific values for these hyperparameters or detailed network architectures used in their experiments. |
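Since the paper describes its two-stage procedure only in narrative text (see the Pseudocode row above), the following is a hedged structural sketch of that pipeline: a first VAE is trained on the data, a second VAE is trained on the first stage's latent codes, and samples are drawn by decoding through stage 2 and then stage 1. The `ToyVAE` class is a hypothetical linear stand-in, not the authors' architecture, and training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class ToyVAE:
    """Placeholder 'VAE' with random linear encoder/decoder weights.

    A hypothetical stand-in used only to illustrate the data flow of the
    two-stage procedure; the paper's actual networks are nonlinear.
    """
    def __init__(self, in_dim, latent_dim):
        self.enc = rng.standard_normal((in_dim, latent_dim)) / np.sqrt(in_dim)
        self.dec = rng.standard_normal((latent_dim, in_dim)) / np.sqrt(latent_dim)

    def encode(self, x):   # x: (n, in_dim) -> latent codes (n, latent_dim)
        return x @ self.enc

    def decode(self, z):   # z: (n, latent_dim) -> reconstructions (n, in_dim)
        return z @ self.dec

# Stage 1: train a VAE on the data x (training itself omitted in this sketch).
x = rng.standard_normal((128, 784))        # e.g. flattened MNIST-sized inputs
vae1 = ToyVAE(in_dim=784, latent_dim=64)
z = vae1.encode(x)                         # latent codes of the training data

# Stage 2: train a second VAE on the latent codes z from stage 1.
vae2 = ToyVAE(in_dim=64, latent_dim=64)

# Sampling: draw u ~ N(0, I), decode through stage 2 to get a latent z',
# then decode z' through stage 1 to get a data-space sample x'.
u = rng.standard_normal((16, 64))
z_prime = vae2.decode(u)
x_prime = vae1.decode(z_prime)
print(x_prime.shape)  # (16, 784)
```

The key point the sketch captures is that sampling never touches the stage-1 encoder: new samples come from the stage-2 prior, mapped through both decoders in sequence.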