Sparse Gaussian Processes with Spherical Harmonic Features
Authors: Vincent Dutordoir, Nicolas Durrande, James Hensman
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that our model is able to fit a regression model for a dataset with 6 million entries two orders of magnitude faster compared to standard sparse GPs, while retaining state of the art accuracy. We also demonstrate competitive performance on classification with non-conjugate likelihoods. ... Section 4 is dedicated to the experimental evaluation. |
| Researcher Affiliation | Collaboration | (1) PROWLER.io, Cambridge, United Kingdom; (2) Department of Engineering, University of Cambridge, Cambridge, United Kingdom; (3) Amazon Research, Cambridge, United Kingdom (work done while JH was affiliated to PROWLER.io). |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm', nor any structured algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for the work described, nor a direct link to a repository containing their implementation code. |
| Open Datasets | Yes | We use five UCI regression datasets to compare the performance of our method against other GP approaches. ... We use the 2008 U.S. airline delay dataset to assess these capabilities. |
| Dataset Splits | Yes | For each dataset we randomly select 90% of the data for training and 10% for testing and repeat this 5 times to get error bars. ... Every split is repeated 10 times and we report the mean and one standard deviation of the MSE and NLPD. |
| Hardware Specification | Yes | All these experiments were run on a single consumer-grade GPU (Nvidia GTX 1070). |
| Software Dependencies | No | The paper mentions optimizers like L-BFGS and Adam, but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | For VISH we normalize the inputs so that each column falls within [-√d, √d]. ... For SVGP and VISH we first used a subset of 20,000 points to train the variational and hyper-parameters of the model with L-BFGS. We then applied Adam to the whole dataset. |
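
As a rough illustration of the protocol quoted in the "Dataset Splits" and "Experiment Setup" rows, the sketch below reproduces the input normalization to [-√d, √d], the 90%/10% random split, and the two-stage L-BFGS-then-Adam optimisation. Because the authors' VISH implementation is not released (see "Open Source Code"), a GPflow SVGP with a Matérn-3/2 kernel stands in for the model; the kernel, inducing-point count, batch size, learning rate, and iteration counts are all assumptions for illustration, not values from the paper.

```python
import numpy as np
import tensorflow as tf
import gpflow

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))   # stand-in for a UCI design matrix
y = rng.normal(size=(5000, 1))

# Normalize each input column to [-sqrt(d), sqrt(d)], as quoted for VISH.
d = X.shape[1]
lo, hi = X.min(axis=0), X.max(axis=0)
X = (2.0 * (X - lo) / (hi - lo) - 1.0) * np.sqrt(d)

# 90%/10% random train/test split (the paper repeats this 5 times for error bars).
n = len(X)
perm = rng.permutation(n)
n_train = int(0.9 * n)
X_train, y_train = X[perm[:n_train]], y[perm[:n_train]]
X_test, y_test = X[perm[n_train:]], y[perm[n_train:]]

# Stage 1: L-BFGS on a subset (the paper uses 20,000 points; here the slice
# just takes whatever is available) to train variational and hyper-parameters.
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.Matern32(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=X_train[:100].copy(),
    num_data=n_train,
)
subset = (X_train[:20_000], y_train[:20_000])
gpflow.optimizers.Scipy().minimize(
    model.training_loss_closure(subset),
    model.trainable_variables,
    options=dict(maxiter=100),
)

# Stage 2: Adam on minibatches drawn from the whole training set.
batches = iter(
    tf.data.Dataset.from_tensor_slices((X_train, y_train))
    .shuffle(n_train).batch(1024).repeat()
)
loss = model.training_loss_closure(batches)
adam = tf.optimizers.Adam(learning_rate=1e-3)
for _ in range(500):
    adam.minimize(loss, model.trainable_variables)

# Report test MSE and NLPD, the metrics the report cites.
mean, _ = model.predict_y(X_test)
mse = np.mean((mean.numpy() - y_test) ** 2)
nlpd = -np.mean(model.predict_log_density((X_test, y_test)).numpy())
print(f"MSE={mse:.3f}  NLPD={nlpd:.3f}")
```

In the paper's actual setup the inducing features would be spherical harmonics on the hypersphere rather than SVGP inducing points, which is what yields the reported two-orders-of-magnitude speed-up; the sketch only mirrors the surrounding training and evaluation protocol.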