Rates of Convergence for Sparse Variational Gaussian Process Regression

Authors: David Burt, Carl Edward Rasmussen, Mark van der Wilk

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 1. Increasing N with fixed M increases the expected KL divergence. t/(2σ_n²) is a lower bound for the expected value of the KL divergence when y is generated according to our prior model. Figure 3. Rates of convergence as M increases on a fixed dataset of size N = 1000, with a SE-kernel with ℓ = 0.6, v = 1, σ_n = 1, x ∼ N(0, 1), and y sampled from the prior. Figure 4. We increase N and take M = C log(N) for a one-dimensional SE-kernel and normally distributed inputs. The KL divergence decays rapidly, as predicted by Corollary 2.
Researcher Affiliation | Collaboration | ¹University of Cambridge, Cambridge, UK; ²PROWLER.io, Cambridge, UK.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper notes that "Open source implementations of approximate k-DPPs are available (e.g. [Gautier et al., 2018])" and links to DPPy, a library for k-DPP sampling, in the acknowledgements. However, it does not state that code for the specific methodology described in this paper is open source, nor does it provide a link to such code. (A hedged DPPy usage sketch follows the table.)
Open Datasets | No | Figure 3. Rates of convergence as M increases on a fixed dataset of size N = 1000, with a SE-kernel with ℓ = 0.6, v = 1, σ_n = 1, x ∼ N(0, 1), and y sampled from the prior. The paper uses synthetic data generated according to a specified distribution, not a publicly available dataset.
Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits. It discusses theoretical bounds and illustrates them with simulated data rather than evaluating on partitioned empirical datasets.
Hardware Specification | No | The paper does not describe any specific hardware used for running its experiments or simulations.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in its implementation or experiments.
Experiment Setup | Yes | Figure 3. Rates of convergence as M increases on a fixed dataset of size N = 1000, with a SE-kernel with ℓ = 0.6, v = 1, σ_n = 1, x ∼ N(0, 1), and y sampled from the prior. Figure 4. We increase N and take M = C log(N) for a one-dimensional SE-kernel and normally distributed inputs. (A hedged simulation sketch of this setup follows the table.)
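
The experiment setup quoted above can be illustrated numerically. The following is a minimal sketch, not the authors' released code: it samples y from a SE-kernel GP prior with the stated hyperparameters (ℓ = 0.6, v = 1, σ_n = 1, x ∼ N(0, 1)) and computes the KL divergence between the optimal sparse variational posterior and the exact posterior as the gap between the log marginal likelihood and Titsias' collapsed ELBO, alongside the t/(2σ_n²) quantity mentioned in the Figure 1 caption. The uniform subsampling of inducing points, the jitter, and the seed are assumptions made here for illustration (the paper analyses k-DPP initialisations).

```python
import numpy as np

def se_kernel(x1, x2, lengthscale=0.6, variance=1.0):
    """Squared-exponential (SE) kernel matrix for 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def kl_gap(x, y, z, noise_var=1.0):
    """KL(q || posterior) at the optimal Titsias variational distribution,
    computed as the log marginal likelihood minus the collapsed ELBO."""
    n, m = x.shape[0], z.shape[0]
    Kff = se_kernel(x, x)
    Kuf = se_kernel(z, x)
    Kuu = se_kernel(z, z) + 1e-6 * np.eye(m)   # jitter for numerical stability (assumption)
    L = np.linalg.cholesky(Kuu)
    A = np.linalg.solve(L, Kuf)                # Nystrom approximation Qff = A.T @ A
    Qff = A.T @ A
    t = np.trace(Kff - Qff)                    # trace of the Nystrom residual

    def log_mvn(cov):
        # log density of y under N(0, cov)
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (y @ np.linalg.solve(cov, y) + logdet + n * np.log(2 * np.pi))

    log_marg = log_mvn(Kff + noise_var * np.eye(n))
    elbo = log_mvn(Qff + noise_var * np.eye(n)) - t / (2 * noise_var)
    return log_marg - elbo, t / (2 * noise_var)

# Setup from the captions: N = 1000, SE kernel with l = 0.6, v = 1,
# sigma_n = 1, x ~ N(0, 1), y sampled from the prior.
rng = np.random.default_rng(0)
N, noise_var = 1000, 1.0
x = rng.normal(size=N)
y = rng.multivariate_normal(np.zeros(N), se_kernel(x, x) + noise_var * np.eye(N))

for M in (5, 10, 20, 40, 80):
    z = rng.choice(x, size=M, replace=False)   # uniform subsampling of inducing points (assumption)
    kl, trace_term = kl_gap(x, y, z, noise_var)
    print(f"M = {M:3d}   KL gap = {kl:10.4f}   t/(2*sigma_n^2) = {trace_term:10.4f}")
```

Sweeping M on a fixed dataset (as in Figure 3), or increasing N with M = C log(N) (as in Figure 4), and recording the printed KL gap should exhibit the qualitative decay described in the captions.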
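
The Open Source Code row above points to DPPy as an open-source implementation of approximate k-DPP sampling. The sketch below shows how inducing points could be drawn from a k-DPP over the training inputs using DPPy; the FiniteDPP('likelihood', L=...) constructor and sample_exact_k_dpp call reflect the library's documented interface as understood here, not the authors' code, and should be checked against the installed DPPy version.

```python
import numpy as np
from dppy.finite_dpps import FiniteDPP  # DPPy: https://github.com/guilgautier/DPPy

def se_kernel(x1, x2, lengthscale=0.6, variance=1.0):
    """Squared-exponential kernel matrix for 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.RandomState(0)
x = rng.randn(1000)                  # candidate inputs, x ~ N(0, 1)
L = se_kernel(x, x)                  # L-ensemble ("likelihood") kernel over the candidates

dpp = FiniteDPP('likelihood', L=L)   # k-DPP parameterised by the kernel matrix
dpp.sample_exact_k_dpp(size=20, random_state=rng)
idx = dpp.list_of_samples[-1]        # indices of the M = 20 sampled inducing points
z = x[idx]
print(sorted(idx))
```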