Variational Gaussian Process

Authors: Dustin Tran, Rajesh Ranganath, David Blei

ICLR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We study the VGP on standard benchmarks for unsupervised learning, applying it to perform inference in deep latent Gaussian models (Rezende et al., 2014) and DRAW (Gregor et al., 2015), a latent attention model. For both models, we report the best results to date.
Researcher Affiliation | Academia | Dustin Tran (Harvard University, dtran@g.harvard.edu); Rajesh Ranganath (Princeton University, rajeshr@cs.princeton.edu); David M. Blei (Columbia University, david.blei@columbia.edu)
Pseudocode | Yes | Algorithm 1: Black box inference with a variational Gaussian process. (An illustrative sampling sketch follows the table.)
Open Source Code | No | The paper mentions using existing tools like Stan and Theano, but it does not state that its own code for the described methodology is open source or provide a link to it.
Open Datasets | Yes | The binarized MNIST data set (Salakhutdinov & Murray, 2008) consists of 28x28 pixel images with binary-valued outcomes. (A binarization sketch follows the table.)
Dataset Splits | No | The paper mentions a split for the Sketch dataset ('We partition it into 18,000 training examples and 2,000 test examples') but does not specify a separate validation split or detail a cross-validation strategy.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, or memory specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using 'Stan and Theano' as differentiation tools but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | For the learning rate we apply a version of RMSProp (Tieleman & Hinton, 2012), in which we scale the value with a decaying schedule 1/t^(1/2 + ε) for ε > 0. We fix the size of variational data to be 500 across all experiments and set the latent input dimension equal to the number of latent variables.
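
The decaying learning-rate schedule quoted in the Experiment Setup row can be made concrete with a short sketch. The snippet below is a minimal NumPy illustration of RMSProp whose step size is scaled by 1/t^(1/2 + ε); the function name, hyperparameter defaults, and the exact way the schedule multiplies the RMSProp step are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def rmsprop_decayed(grad_fn, theta, n_steps=1000,
                    base_lr=1e-3, rho=0.9,
                    decay_eps=1e-2, stab=1e-8):
    # RMSProp whose step size is additionally scaled by 1 / t^(1/2 + decay_eps).
    # Hyperparameter defaults are illustrative, not taken from the paper.
    v = np.zeros_like(theta)                      # running mean of squared gradients
    for t in range(1, n_steps + 1):
        g = grad_fn(theta)
        v = rho * v + (1.0 - rho) * g ** 2        # RMSProp accumulator
        lr_t = base_lr / t ** (0.5 + decay_eps)   # decaying schedule 1 / t^(1/2 + eps)
        theta = theta - lr_t * g / (np.sqrt(v) + stab)
    return theta

# Toy usage: minimize f(theta) = ||theta||^2 / 2, whose gradient is theta itself.
theta_hat = rmsprop_decayed(lambda th: th, np.array([5.0, -3.0]))
```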
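
To connect the Pseudocode row (Algorithm 1) with the variational-data settings in the Experiment Setup row, here is a heavily simplified sketch of drawing samples from a variational Gaussian process: a standard-normal latent input is warped through a GP conditioned on a set of variational data points, and the latent variables are drawn around the warped value. The kernel choice, the Gaussian output step, and all function names and sizes are illustrative assumptions; this is not the paper's Algorithm 1, which performs full black box inference rather than just sampling.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; the actual kernel choice is an assumption.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def sample_vgp(s, t, n_samples=1, jitter=1e-6):
    # Draw latent variables z by warping standard-normal latent inputs through
    # a GP conditioned on variational data (s, t).
    #   s : (m, c) variational inputs, t : (m, d) variational outputs.
    # Output noise model and hyperparameters are placeholders, not the paper's choices.
    m, c = s.shape
    d = t.shape[1]
    K_ss = rbf_kernel(s, s) + jitter * np.eye(m)
    K_inv = np.linalg.inv(K_ss)

    xi = np.random.randn(n_samples, c)             # latent inputs xi ~ N(0, I_c)
    K_xs = rbf_kernel(xi, s)                       # cross-covariance with variational inputs
    K_xx = rbf_kernel(xi, xi)

    mean = K_xs @ K_inv @ t                        # conditional GP mean, shape (n_samples, d)
    var = np.diag(K_xx - K_xs @ K_inv @ K_xs.T)    # conditional variance per sample
    # Draw each latent dimension around the warped input.
    z = mean + np.sqrt(np.maximum(var, 0.0))[:, None] * np.random.randn(n_samples, d)
    return z

# Toy usage: 500 variational data points, latent input dimension equal to the
# number of latent variables (10 here), mirroring the Experiment Setup row above.
s = np.random.randn(500, 10)
t = np.random.randn(500, 10)
z = sample_vgp(s, t, n_samples=4)
```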
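
For the Open Datasets row, the sketch below shows one common way to binarize MNIST by treating each pixel intensity as a Bernoulli probability. Whether this matches the fixed binarization of Salakhutdinov & Murray (2008) used in the paper is an assumption, and loading the raw images is left out.

```python
import numpy as np

def binarize_mnist(images, rng=None):
    # Binarize gray-scale MNIST images by treating each pixel intensity
    # (scaled to [0, 1]) as a Bernoulli probability.
    # `images` is expected as an (N, 28*28) array of values in [0, 1].
    rng = np.random.default_rng(0) if rng is None else rng
    return (rng.random(images.shape) < images).astype(np.float32)

# Example on random arrays standing in for real MNIST intensities:
fake_images = np.random.rand(5, 28 * 28)
binary = binarize_mnist(fake_images)
```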