MCMC for Variationally Sparse Gaussian Processes
Authors: James Hensman, Alexander G. de G. Matthews, Maurizio Filippone, Zoubin Ghahramani
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Code to replicate each experiment in this paper is available at github.com/sparseMCMC. |
| Researcher Affiliation | Academia | James Hensman (CHICAS, Lancaster University, james.hensman@lancaster.ac.uk); Alexander G. de G. Matthews (University of Cambridge, am554@cam.ac.uk); Maurizio Filippone (EURECOM, maurizio.filippone@eurecom.fr); Zoubin Ghahramani (University of Cambridge, zoubin@cam.ac.uk) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code to replicate each experiment in this paper is available at github.com/sparseMCMC. |
| Open Datasets | Yes | We first use the image dataset [29]... coal-mining disaster data... pine sapling data [30]... MNIST: 'The MNIST dataset is a well studied benchmark with a defined training/test split.' |
| Dataset Splits | No | For the image dataset, 'The data were split randomly into 1000/1019 train/test sets'; for MNIST, 'The MNIST dataset is a well studied benchmark with a defined training/test split'. Train and test splits are thus mentioned, but no explicit validation split (e.g., percentages or counts) is given in the main text. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper mentions running experiments 'on a desktop computer' or 'on a desktop machine' but does not provide specific hardware details such as CPU/GPU models, processor types, or memory specifications. |
| Software Dependencies | No | The paper mentions 'Cython implementation' but does not specify version numbers for Cython or any other key software dependencies required for replication. |
| Experiment Setup | Yes | We drew 10,000 samples, discarding the first 1000... ran our sampling scheme using HMC, drawing 3000 samples... ϵ was fixed to 0.001... We used 500 inducing points, initialized from the training data using k-means. (Illustrative sketches of the k-means initialization and the HMC burn-in protocol follow the table.) |
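For the Dataset Splits row, the paper reports only that the image data 'were split randomly into 1000/1019 train/test sets'. A minimal sketch of such a split, assuming NumPy; the function name and seed are illustrative and not taken from the paper:

```python
import numpy as np

def random_split(X, y, n_train=1000, seed=0):
    """Randomly split (X, y) into train/test sets.

    Mirrors the paper's 1000/1019 image-dataset split; the seed and
    function name are illustrative assumptions, not from the paper.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    train, test = idx[:n_train], idx[n_train:]
    return (X[train], y[train]), (X[test], y[test])
```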
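The Experiment Setup row states that 500 inducing points were 'initialized from the training data using k-means'. A sketch of that initialization using scikit-learn's KMeans; the `n_init` and `random_state` settings are assumptions, as the paper does not report them:

```python
import numpy as np
from sklearn.cluster import KMeans

def init_inducing_points(X_train, n_inducing=500, seed=0):
    """Pick inducing point locations Z as k-means centroids of the
    training inputs, as described in the Experiment Setup row.
    The k-means hyperparameters here are assumptions."""
    km = KMeans(n_clusters=n_inducing, n_init=10, random_state=seed)
    km.fit(np.asarray(X_train))
    return km.cluster_centers_  # Z, shape (n_inducing, input_dim)
```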
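The sampling protocol quoted above (HMC with a fixed step size, 10,000 draws with the first 1,000 discarded as burn-in) can be illustrated with a generic textbook HMC transition. This is not the authors' implementation, which lives in the linked repository; the toy Gaussian target, leapfrog count, and demo step size are assumptions (the paper fixed ϵ to 0.001 for its model):

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, eps, n_leapfrog, rng):
    """One generic HMC transition (leapfrog integrator plus Metropolis
    accept/reject). Illustrative only; not the paper's sampler."""
    p = rng.standard_normal(x.shape)
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_prob(x_new)    # half momentum step
    for _ in range(n_leapfrog - 1):
        x_new += eps * p_new                     # full position step
        p_new += eps * grad_log_prob(x_new)      # full momentum step
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_log_prob(x_new)    # final half momentum step
    log_accept = (log_prob(x_new) - 0.5 * (p_new @ p_new)) \
               - (log_prob(x) - 0.5 * (p @ p))
    return x_new if np.log(rng.uniform()) < log_accept else x

# Demo on a 2-D standard Gaussian: draw 10,000 samples and discard the
# first 1,000 as burn-in (counts match the Experiment Setup row; the
# target, step size, and leapfrog count here are toy choices).
log_prob = lambda x: -0.5 * (x @ x)
grad_log_prob = lambda x: -x
rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(10_000):
    x = hmc_step(x, log_prob, grad_log_prob, eps=0.1, n_leapfrog=20, rng=rng)
    samples.append(x)
kept = samples[1_000:]  # retained samples after burn-in
```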