Gaussian Process Prior Variational Autoencoders

Authors: Francesco Paolo Casale, Adrian Dalca, Luca Saglietti, Jennifer Listgarten, Nicolò Fusi

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications." From Section 4 (Experiments): "We focus on the task of making predictions of unseen images, given specified auxiliary information."
Researcher Affiliation | Collaboration | Microsoft Research New England, Cambridge (MA), USA; Computer Science and Artificial Intelligence Lab, MIT, Cambridge (MA), USA; Martinos Center for Biomedical Imaging, MGH, HMS, Boston (MA), USA; Italian Institute for Genomic Medicine, Torino, Italy; EECS Department, University of California, Berkeley (CA), USA.
Pseudocode | No | The paper describes procedures in numbered steps (e.g., for stochastic backpropagation) but does not provide a formal pseudocode or algorithm block.
Open Source Code | Yes | "An implementation of GPPVAE is available at https://github.com/fpcasale/GPPVAE."
Open Datasets | Yes | "We considered a variation of the MNIST dataset..." ... "We considered the Face-Place Database (3.0) (Righi et al., 2012)"
Dataset Splits | Yes | "We then kept 90% of the data for training and test, and the rest for validation." ... "We randomly selected 80% of the data for training (n = 3,868), 10% for validation (n = 484) and 10% for testing (n = 483)." A sketch of such a split follows the table.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify any software dependencies with version numbers (e.g., specific libraries, frameworks, or programming-language versions beyond a general reference to Python for the code).
Experiment Setup | Yes | "All models were trained using the Adam optimizer (Kingma and Ba, 2014) with standard parameters and a learning rate of 0.001." ... "we used a higher learning rate of 0.01 in this setting." ... "We set the dimension of the latent space to L = 16." ... "We set the dimension of the latent space to L = 256." A hedged training-setup sketch follows the table.
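
To make the quoted 80/10/10 split concrete, here is a minimal Python sketch. The function name, the NumPy-based shuffling, and the seed are illustrative assumptions, not taken from the GPPVAE codebase; note that integer rounding may assign the leftover sample to validation or test differently than the paper's exact counts (3,868 / 484 / 483).

    import numpy as np

    def split_indices(n, train_frac=0.8, val_frac=0.1, seed=0):
        # Randomly partition n sample indices into train/val/test sets,
        # mirroring the 80/10/10 split quoted in the table; whatever
        # remains after the train and validation fractions goes to test.
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n)
        n_train = int(train_frac * n)
        n_val = int(val_frac * n)
        return (perm[:n_train],
                perm[n_train:n_train + n_val],
                perm[n_train + n_val:])

    # The face-image experiment uses 3,868 + 484 + 483 = 4,835 images in total.
    train_idx, val_idx, test_idx = split_indices(4835)
    print(len(train_idx), len(val_idx), len(test_idx))  # 3868 483 484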
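
Similarly, here is a minimal sketch of the quoted optimization settings, assuming PyTorch (the framework the released code appears to use). The ToyVAE model is a hypothetical stand-in: only the optimizer choice (Adam with standard parameters, learning rate 0.001) and the latent dimensions (L = 16 for the MNIST variant, L = 256 for the face data) come from the paper; the architecture and loss below are illustrative, not the authors'.

    import torch
    from torch import nn, optim

    class ToyVAE(nn.Module):
        # Hypothetical stand-in for the GPPVAE encoder/decoder.
        def __init__(self, x_dim=784, latent_dim=16):
            super().__init__()
            self.enc = nn.Linear(x_dim, 2 * latent_dim)  # outputs mean and log-variance
            self.dec = nn.Linear(latent_dim, x_dim)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
            return self.dec(z), mu, logvar

    model = ToyVAE(latent_dim=16)  # L = 16 (MNIST variant); latent_dim=256 for the face data
    opt = optim.Adam(model.parameters(), lr=1e-3)  # "standard parameters", learning rate 0.001

    x = torch.rand(32, 784)  # placeholder batch of flattened images
    recon, mu, logvar = model(x)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    loss = nn.functional.mse_loss(recon, x) + kl
    opt.zero_grad()
    loss.backward()
    opt.step()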