Variationally Auto-Encoded Deep Gaussian Processes
Authors: Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence
ICLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show the efficacy of the method on a variety of challenges including deep unsupervised learning and deep Bayesian optimization. and, from Section 5 (Experiments): As a probabilistic generative model, VAE-DGP is applicable to a range of different tasks such as data generation, data imputation, etc. In this section we evaluate our model in a variety of problems and compare it with the alternatives in the literature. |
| Researcher Affiliation | Academia | Zhenwen Dai, Andreas Damianou, Javier González & Neil Lawrence, Department of Computer Science, University of Sheffield, UK {z.dai, andreas.damianou, j.h.gonzalez, n.lawrence}@sheffield.ac.uk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described (no specific repository link, explicit code release statement, or mention of code in supplementary materials). |
| Open Datasets | Yes | We train the VAE-DGP on MNIST (Le Cun et al., 1998). and We first apply our model to the combination of Frey faces and Yale faces (Frey-Yale). and We also apply VAE-DGP to the street view house number dataset (SVHN) (Netzer et al., 2011). |
| Dataset Splits | No | The paper specifies training and test set sizes (e.g., 'the whole training set for learning, which consists of 60,000 28×28 images' and 'the test set, which consists of 10,000 images' for MNIST), but does not explicitly provide details about a validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components and algorithms like 'L-BFGS' and 'multi-layer perceptron (MLP)', but it does not provide specific version numbers for any software dependencies or libraries needed to replicate the experiments. |
| Experiment Setup | Yes | The applied VAE-DGP has two hidden layers (a 2D top hidden layer and a 20D middle hidden layer). The exponentiated quadratic kernel is used for all the layers with 100 inducing points. All the MLPs in the recognition model have two hidden layers with widths (500-300). and The exponentiated quadratic kernel is used for all the layers with 300 inducing points. All the MLPs in the recognition model have two hidden layers with widths (500-300). and We use three hidden layers with the dimensionality of latent space from top to bottom (5-30-500). The top two hidden layers use the exponentiated quadratic kernel and the observed layer uses the linear kernel with 500 inducing points. |
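The setup quoted above centers on the exponentiated quadratic (RBF) kernel with a fixed number of inducing points per layer. As an illustration only (not the authors' code), a minimal NumPy sketch of that kernel and a random inducing-point subset might look like:

```python
import numpy as np

def exponentiated_quadratic(X, Z, variance=1.0, lengthscale=1.0):
    """Exponentiated quadratic (RBF) kernel between the rows of X and Z."""
    sq_dist = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Z**2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

rng = np.random.default_rng(0)
# Hypothetical data standing in for a 20D middle hidden layer.
X = rng.standard_normal((1000, 20))
# 100 inducing inputs, as in the first quoted configuration.
Z = X[rng.choice(len(X), size=100, replace=False)]

Kzz = exponentiated_quadratic(Z, Z)  # 100x100 covariance at inducing points
Kxz = exponentiated_quadratic(X, Z)  # 1000x100 cross-covariance
```

In sparse GP methods, `Kzz` and `Kxz` are the building blocks of the variational lower bound; the layer dimensions, inducing-point counts, and data here are placeholders chosen to mirror the quoted configuration.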