Parametric Gaussian Process Regressors

Authors: Martin Jankowiak, Geoff Pleiss, Jacob Gardner

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we compare the empirical performance of the approaches to scalable GP regression introduced in Sec. 3 to the baseline inference strategies described in Sec. 2.3. All our models use a prior mean of zero and a Matérn kernel with independent length scales for each input dimension. We consider a mix of univariate regression datasets from the UCI repository (Dua and Graff, 2017), with the number of datapoints ranging from N ∼ 10^4 to N ∼ 10^6 and the number of input dimensions in the range dim(x) ∈ [3, 380].
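The quoted setup (zero prior mean, Matérn kernel with an independent length scale per input dimension) maps onto standard GPyTorch components. Below is a minimal sketch of such a model, assuming an SVGP-style variational parameterization with learnable inducing points; the class name, the Matérn smoothness ν = 5/2, and all sizes are illustrative choices, not details stated in the quote above.

```python
import torch
import gpytorch

class SparseGPModel(gpytorch.models.ApproximateGP):
    """Hypothetical sparse GP with the priors described in the quote above."""

    def __init__(self, inducing_points, input_dim):
        # Standard variational strategy over M inducing points
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        # Prior mean of zero, as stated in the paper
        self.mean_module = gpytorch.means.ZeroMean()
        # Matérn kernel with an independent length scale per input dimension (ARD);
        # nu=2.5 is an assumption, since the quote does not pin down the smoothness
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.MaternKernel(nu=2.5, ard_num_dims=input_dim)
        )

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )
```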
Researcher Affiliation | Collaboration | ¹The Broad Institute, Cambridge, MA, USA; ²Dept. of Computer Science, Cornell University, Ithaca, NY, USA; ³Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA. Correspondence to: Martin Jankowiak <jankowiak@gmail.com>. ... This work was completed while MJ and JG were at Uber AI.
Pseudocode | Yes | Algorithm 1: Scalable GP Regression. All of the inference algorithms we consider follow the same basic pattern and differ only in the form of the objective function, e.g. L_svgp (Eqn. 6), L_vfitc (Eqn. 15), or L_ppgpr (Eqn. 18). Similarly, for all methods the predictive distribution is given by Eqn. 11. See Sec. B in the supplementary materials for a discussion of the time and space complexity of each method.
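Since the quoted Algorithm 1 states that all methods share one training loop and differ only in the objective, a sketch of that loop may be useful. The snippet below reuses the hypothetical SparseGPModel class sketched earlier and random stand-in data; it uses GPyTorch's VariationalELBO for L_svgp, and GPyTorch's PredictiveLogLikelihood (which implements this paper's L_ppgpr) can be swapped in without touching the rest of the loop. Batch size, learning rate, and epoch count are illustrative.

```python
import torch
import gpytorch
from torch.utils.data import TensorDataset, DataLoader

# Stand-in tensors playing the role of a UCI regression dataset
train_x, train_y = torch.randn(1000, 3), torch.randn(1000)
train_loader = DataLoader(TensorDataset(train_x, train_y), batch_size=256, shuffle=True)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = SparseGPModel(inducing_points=train_x[:500], input_dim=3)  # class sketched above

# The objective is the only piece that changes between the methods compared in
# the paper; here the SVGP ELBO (L_svgp).
objective = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))

optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)

model.train()
likelihood.train()
for epoch in range(100):
    for x_batch, y_batch in train_loader:
        optimizer.zero_grad()
        loss = -objective(model(x_batch), y_batch)  # minimize the negative objective
        loss.backward()
        optimizer.step()
```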
Open Source Code | Yes | For an implementation of our method in GPyTorch see https://git.io/JJy9b.
Open Datasets | Yes | We consider a mix of univariate regression datasets from the UCI repository (Dua and Graff, 2017).
Dataset Splits | Yes | Results are averaged over ten random train/test/validation splits. (Figure 2 caption)
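For concreteness, here is one hedged way the ten random train/test/validation splits might be generated; the 60/20/20 proportions and the seed are assumptions, since the caption quoted here does not state them.

```python
import numpy as np

def random_splits(n, n_splits=10, frac_train=0.6, frac_val=0.2, seed=0):
    """Return n_splits (train_idx, val_idx, test_idx) triples over n datapoints.

    The split proportions are illustrative, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_splits):
        perm = rng.permutation(n)
        n_train = int(frac_train * n)
        n_val = int(frac_val * n)
        splits.append((
            perm[:n_train],                    # train indices
            perm[n_train:n_train + n_val],     # validation indices
            perm[n_train + n_val:],            # test indices
        ))
    return splits
```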
Hardware Specification | No | The paper mentions 'GPU acceleration' and 'Uber compute infrastructure' but does not provide specific details on the CPU or GPU models, or other hardware specifications, used for the experiments.
Software Dependencies | No | The paper mentions GPyTorch but does not specify its version, or the versions of any other software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | All our models use a prior mean of zero and a Matérn kernel with independent length scales for each input dimension. (Sec. 5.1) ... When β_reg = 1 the form of the objective in Eqn. 17... (Sec. 3.2) ... Finally we note that for most datasets PPGPR prefers small values of β_reg. (Sec. 5.1) ... MCDropout, which requires forwarding data points through many sampled models (here 50). (Sec. 5.3)
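On the β_reg hyperparameter: in GPyTorch's PredictiveLogLikelihood (the library's implementation of this paper's PPGPR objective), the KL regularizer is scaled by a `beta` argument, which is the natural place to realize a small β_reg. A minimal sketch, reusing the model and likelihood from the training-loop sketch above; the value 0.5 is illustrative, chosen only to reflect the quoted observation that PPGPR prefers small β_reg.

```python
# Hedged sketch: swap the ELBO for the PPGPR objective with a down-weighted KL
# term. beta=0.5 is an illustrative value, not one reported in the paper.
objective = gpytorch.mlls.PredictiveLogLikelihood(
    likelihood, model, num_data=train_y.size(0), beta=0.5
)
```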