Constant-Time Predictive Distributions for Gaussian Processes
Authors: Geoff Pleiss, Jacob Gardner, Kilian Weinberger, Andrew Gordon Wilson
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, LOVE computes covariances up to 2,000 times faster and draws samples 18,000 times faster than existing methods, all without sacrificing accuracy. We empirically validate LOVE on seven datasets and find that it consistently provides substantial speedups over existing methods. Variances and samples are accurate to within four decimals. |
| Researcher Affiliation | Academia | Cornell University. Correspondence to: Geoff Pleiss <geoff@cs.cornell.edu>, Jacob R. Gardner <jrg365@cornell.edu>, Andrew Gordon Wilson <andrew@cornell.edu>. |
| Pseudocode | Yes | Algorithm 1: LOVE for fast predictive variances. |
| Open Source Code | Yes | LOVE is implemented in the GPyTorch library. Examples are available at http://bit.ly/gpytorch-examples. (A usage sketch for LOVE's fast predictive variances follows the table.) |
| Open Datasets | Yes | We test on the two largest UCI datasets which can still be solved exactly (Pol Tele, Elevators) and two Bayesian optimization benchmark functions (Eggholder, 2 dimensional, and Styblinski-Tang, 10 dimensional). The airline passenger dataset (Airline) measures the average monthly number of passengers from 1949 to 1961 (Hyndman, 2005). |
| Dataset Splits | Yes | We aim to extrapolate the numbers for the final 4 years (48 measurements) given data for the first 8 years (96 measurements). (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | All timing experiments utilize GPU acceleration, performed on an NVIDIA GTX 1070. |
| Software Dependencies | No | The paper mentions the "GPyTorch library" and "ADAM (Kingma & Ba, 2015)" but does not specify version numbers for these software components or any other libraries. |
| Experiment Setup | Yes | We optimize models with ADAM (Kingma & Ba, 2015) and a learning rate of 0.1. (A minimal training sketch follows the table.) |
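
The sketches below illustrate three of the rows above. They are minimal, hedged examples, not the authors' released code. First, the Dataset Splits row describes a chronological split of the 144-point Airline series; assuming tensors `x` and `y` hold the monthly timestamps and passenger counts, the split would be:

```python
# Chronological Airline split: first 8 years (96 months) for training,
# final 4 years (48 months) for extrapolation. `x` and `y` are assumed
# to hold the 144 monthly timestamps and passenger counts.
train_x, test_x = x[:96], x[96:]
train_y, test_y = y[:96], y[96:]
```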
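
Second, the Experiment Setup row reports optimization with Adam at a learning rate of 0.1. Below is a minimal GPyTorch training sketch under that setting; the toy data, kernel choice, and iteration count are illustrative assumptions, not the paper's exact configuration:

```python
import math
import torch
import gpytorch

# Illustrative toy data; the paper's actual datasets (UCI, Airline, etc.) differ.
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 2 * math.pi) + 0.1 * torch.randn(100)

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # RBF kernel is an assumption; the paper evaluates multiple kernels.
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

model.train()
likelihood.train()

# Adam with lr=0.1, as reported in the paper's experiment setup.
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for _ in range(50):  # iteration count is an assumption
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    loss.backward()
    optimizer.step()
```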
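
Finally, for the Open Source Code row: in GPyTorch, LOVE-style fast predictive variances are enabled via the `gpytorch.settings.fast_pred_var()` context manager (a note on the current API surface; consult the linked examples for the authoritative usage). Continuing from the model trained above:

```python
model.eval()
likelihood.eval()

test_x_grid = torch.linspace(0, 1, 51)

# fast_pred_var() switches on LOVE's cached, constant-time
# predictive (co)variance computations.
with torch.no_grad(), gpytorch.settings.fast_pred_var():
    pred = likelihood(model(test_x_grid))
    mean = pred.mean
    lower, upper = pred.confidence_region()  # mean +/- 2 standard deviations
```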