Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse

Authors: James Lucas, George Tucker, Roger B. Grosse, Mohammad Norouzi

NeurIPS 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "In this section, we present empirical evidence found from studying two distinct claims. First, we verify our theoretical analysis of the linear VAE model. Second, we explore to what extent these insights apply to deep nonlinear VAEs." |
| Researcher Affiliation | Collaboration | University of Toronto; Google Brain |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://sites.google.com/view/dont-blame-the-elbo |
| Open Datasets | Yes | "We ran two sets of experiments on 1000 randomly chosen MNIST images. ... We trained VAEs with Gaussian observation models on the MNIST [27] and CelebA [28] datasets." |
| Dataset Splits | No | The paper mentions training and evaluation on datasets but does not specify explicit validation splits, proportions, or a distinct validation methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100), CPU models (e.g., Intel Xeon), or memory specifications. Affiliations like Google Brain might imply access to powerful hardware, but no concrete specifications are listed. |
| Software Dependencies | No | The paper mentions TensorFlow in the acknowledgements and references (e.g., [1]) but does not specify a version number; no other software dependencies with version numbers are provided. |
| Experiment Setup | Yes | "We trained MNIST VAEs with 2 hidden layers in both the decoder and encoder, ReLU activations, and 200 latent dimensions. We first evaluated training with fixed values of the observation noise, σ². ... Then, we consider the setting where the observation noise and VAE weights are learned simultaneously." |
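The Experiment Setup row fixes the MNIST architecture quite precisely: encoder and decoder MLPs with 2 hidden layers, ReLU activations, 200 latent dimensions, and a Gaussian observation model with fixed noise σ². The following is a minimal NumPy sketch (not the authors' code) of a single ELBO evaluation for such a model; the hidden-layer width, σ² value, initialization scale, and batch are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, h_dim, z_dim = 784, 256, 200   # h_dim = 256 is an assumed width
sigma2 = 0.1                          # fixed observation noise (assumed value)

def mlp_params(sizes):
    """Random (weight, bias) pairs for a stack of dense layers."""
    return [(rng.normal(0.0, 0.05, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, h, relu_last=False):
    """Forward pass; ReLU on hidden layers, optionally also on the last."""
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if relu_last or i < len(params) - 1:
            h = np.maximum(h, 0.0)
    return h

# Encoder: 2 hidden layers, then linear heads for the mean and log-variance.
enc = mlp_params([x_dim, h_dim, h_dim])
W_mu, b_mu = rng.normal(0.0, 0.05, (h_dim, z_dim)), np.zeros(z_dim)
W_lv, b_lv = rng.normal(0.0, 0.05, (h_dim, z_dim)), np.zeros(z_dim)
# Decoder: 2 hidden layers, linear output = mean of the Gaussian likelihood.
dec = mlp_params([z_dim, h_dim, h_dim, x_dim])

def elbo(x):
    h = mlp(enc, x, relu_last=True)
    mu, logvar = h @ W_mu + b_mu, h @ W_lv + b_lv
    # Reparameterization trick: z = mu + std * eps
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    x_hat = mlp(dec, z)
    # Gaussian log-likelihood with fixed observation noise sigma^2
    ll = -0.5 * np.sum((x - x_hat) ** 2 / sigma2
                       + np.log(2 * np.pi * sigma2), axis=1)
    # KL( q(z|x) || N(0, I) ) in closed form for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1)
    return float(np.mean(ll - kl))

x = rng.random((8, x_dim))  # stand-in batch; real runs would use MNIST pixels
print(elbo(x))
```

A fixed σ² enters only as a scaling of the reconstruction term, which is why the paper can study its effect on collapse separately before letting σ² be learned jointly with the weights.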