On Incorporating Inductive Biases into VAEs

Authors: Ning Miao, Emile Mathieu, Siddharth N, Yee Whye Teh, Tom Rainforth

ICLR 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show their superiority compared with baseline methods in both generation and feature quality, most notably providing state-of-the-art performance for learning sparse representations in the VAE framework. |
| Researcher Affiliation | Academia | 1 Department of Statistics, University of Oxford; 2 University of Edinburgh |
| Pseudocode | No | The paper describes the computational steps and formulas, but does not contain a structured pseudocode or algorithm block that is clearly labeled as such. |
| Open Source Code | Yes | Accompanying code is provided at https://github.com/NingMiao/InteL-VAE. |
| Open Datasets | Yes | For real datasets, we load MNIST, Fashion-MNIST, and CelebA directly from TensorFlow (Abadi et al., 2015). (A loading sketch follows the table.) |
| Dataset Splits | Yes | Per-dataset sizes (train/val/test): Unlimited; 55k/5k/10k; 55k/5k/10k; 10k/1k/2k; 163k/20k/20k. Corresponding input spaces: R^2; Binary 28x28; Binary 28x28; Binary 28x28; RGB 64x64x3. |
| Hardware Specification | Yes | All experiments are run on a GTX-1080-Ti GPU. |
| Software Dependencies | No | The paper mentions using 'Tensorflow' but does not provide specific version numbers for it or any other key software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | Table C.1 lists the hyperparameters used for the different experiments, including batch size, optimizer (Adam), and learning rate. (A configuration sketch follows the table.) |
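
For the Open Datasets and Dataset Splits rows, below is a minimal sketch of how the 28x28 datasets could be loaded and split. Only the dataset names, the "directly from TensorFlow" claim, and the 55k/5k/10k split come from the paper; the `load_split` helper, the threshold binarization, and the use of `tf.keras.datasets` are assumptions (CelebA is not available through `tf.keras.datasets` and would need, e.g., `tensorflow_datasets`).

```python
import numpy as np
import tensorflow as tf

def load_split(name="mnist", n_val=5_000):
    """Load a 28x28 dataset and carve out a 55k/5k/10k train/val/test split."""
    loader = {"mnist": tf.keras.datasets.mnist,
              "fashion_mnist": tf.keras.datasets.fashion_mnist}[name]
    (x_train, _), (x_test, _) = loader.load_data()
    # Scale to [0, 1] and threshold-binarize to match the "Binary 28x28"
    # input space (the paper's exact binarization scheme is not specified).
    x_train = (x_train / 255.0 > 0.5).astype(np.float32)
    x_test = (x_test / 255.0 > 0.5).astype(np.float32)
    # Split the official 60k training set into 55k train / 5k validation;
    # the official 10k test set is kept as-is.
    x_val, x_train = x_train[:n_val], x_train[n_val:]
    return x_train, x_val, x_test

x_train, x_val, x_test = load_split("mnist")
print(x_train.shape, x_val.shape, x_test.shape)  # (55000, 28, 28) (5000, 28, 28) (10000, 28, 28)
```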
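For the Experiment Setup row, the sketch below shows where the ingredients named in Table C.1 (batch size, Adam optimizer, learning rate) plug into a training step. Only those three ingredients are from the paper; the numeric values, the two-layer encoder/decoder, and the plain Gaussian-VAE loss are placeholders, not the authors' InteL-VAE architecture.

```python
import tensorflow as tf

BATCH_SIZE = 128      # placeholder: Table C.1 gives per-experiment values
LEARNING_RATE = 1e-3  # placeholder: Table C.1 gives per-experiment values
LATENT_DIM = 32       # illustrative only

# Minimal Gaussian-VAE encoder/decoder standing in for the paper's model.
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2 * LATENT_DIM),  # posterior mean and log-variance
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(28 * 28),  # Bernoulli logits over pixels
])
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)  # Table C.1: Adam

@tf.function
def train_step(x):
    with tf.GradientTape() as tape:
        mean, logvar = tf.split(encoder(x), 2, axis=-1)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mean + tf.exp(0.5 * logvar) * tf.random.normal(tf.shape(mean))
        logits = decoder(z)
        recon = tf.reduce_sum(
            tf.nn.sigmoid_cross_entropy_with_logits(
                labels=tf.reshape(x, (-1, 28 * 28)), logits=logits), axis=-1)
        kl = -0.5 * tf.reduce_sum(1.0 + logvar - mean**2 - tf.exp(logvar), axis=-1)
        loss = tf.reduce_mean(recon + kl)  # negative ELBO
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

With `x_train` from the previous sketch, a training epoch is then `for x in tf.data.Dataset.from_tensor_slices(x_train).shuffle(60_000).batch(BATCH_SIZE): train_step(x)`.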