Scalable Inference for Gaussian Process Models with Black-Box Likelihoods
Authors: Amir Dezfouli, Edwin V. Bonilla
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on small datasets for various problems including regression, classification, Log Gaussian Cox processes, and warped GPs show that our method can perform as well as the full method under high sparsity levels. On larger experiments using the MNIST and the SARCOS datasets we show that our method can provide superior performance to previously published scalable approaches that have been handcrafted to specific likelihood models. |
| Researcher Affiliation | Academia | Amir Dezfouli, The University of New South Wales, akdezfuli@gmail.com; Edwin V. Bonilla, The University of New South Wales, e.bonilla@unsw.edu.au |
| Pseudocode | No | The paper describes mathematical formulations and derivations but does not include any clearly labeled pseudocode or algorithm blocks. The inference steps are described in prose and equations. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | Yes | Our experiments first consider the same six benchmarks with various likelihood models analyzed by [6]. The number of training points (N) on these benchmarks ranges from 300 to 1233 and their input dimensionality (D) ranges from 1 to 256. We also carried out experiments at a larger scale using the MNIST dataset and the SARCOS dataset [16]. |
| Dataset Splits | Yes | This dataset [MNIST] has been extensively used by the machine learning community and contains 50,000 examples for training, 10,000 for validation and 10,000 for testing, with 784-dimensional input vectors. (A sketch reproducing this split appears after the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to conduct the experiments, such as CPU models, GPU models, or memory specifications. |
| Software Dependencies | No | The paper does not list any software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers) that would be necessary to reproduce the experiments. |
| Experiment Setup | No | The paper states 'We refer the reader to the supplementary material for the details of our experimental set-up.' (Section 6, Experiments) and discusses general settings such as sparsity factors and the types of approximate posteriors (FG, MoG1, MoG2; see the second sketch after the table). However, it does not provide specific hyperparameter values (e.g., learning rates, batch sizes, number of epochs) or detailed training configurations in the main text. |
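
The 50,000/10,000/10,000 MNIST split quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming scikit-learn's `fetch_openml` loader and the conventional ordering of the OpenML `mnist_784` dump (neither of which the paper specifies):

```python
# Minimal sketch of the 50k/10k/10k MNIST split described in the paper.
# Assumes the OpenML "mnist_784" dump keeps the conventional ordering:
# the first 60,000 examples are the training pool, the last 10,000 the
# test set. The paper does not say how the data were obtained.
from sklearn.datasets import fetch_openml

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)

X_train, y_train = X[:50_000], y[:50_000]              # 50,000 training examples
X_valid, y_valid = X[50_000:60_000], y[50_000:60_000]  # 10,000 validation examples
X_test, y_test = X[60_000:], y[60_000:]                # 10,000 test examples

assert X_train.shape == (50_000, 784)  # 784-dimensional inputs, as stated
```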
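
Per the original paper (not stated in this report), FG denotes a single Gaussian posterior with a full covariance, while MoG1 and MoG2 denote mixtures of one and two diagonal Gaussians. A hypothetical sketch of what that choice implies for the number of variational parameters, with `M` an assumed number of inducing variables:

```python
# Hypothetical parameter counts for the posterior families named in the
# Experiment Setup row. FG: one Gaussian with a full (lower-triangular)
# covariance; MoG-k: a mixture of k diagonal Gaussians. M is an assumed
# number of inducing variables, not a value from the paper.
M = 200

fg_params = M + M * (M + 1) // 2  # mean vector + lower-triangular covariance

def mog_params(k: int) -> int:
    # k mean vectors, k diagonal variance vectors, k mixture weights
    return k * (2 * M + 1)

print(f"FG: {fg_params}, MoG1: {mog_params(1)}, MoG2: {mog_params(2)}")
```

The count makes the trade-off concrete: the full-covariance FG posterior grows quadratically in M, while the diagonal mixtures grow only linearly, which is what makes them attractive at scale.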