How Good Are Low-Rank Approximations in Gaussian Process Regression?

Authors: Constantinos Daskalakis, Petros Dellaportas, Aristeidis Panos
Pages: 6463–6470

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide experiments on both simulated data and standard benchmarks to evaluate the effectiveness of our theoretical bounds."
Researcher Affiliation | Academia | ¹CSAIL, Massachusetts Institute of Technology, USA; ²University College London, UK; ³University of Warwick, UK; ⁴Athens University of Economics and Business, Greece; ⁵The Alan Turing Institute, UK
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code | No | The paper does not provide any explicit statement about the release of source code, nor a link to a code repository for the described methodology.
Open Datasets | Yes | "We conduct thorough experiments, testing the quality of FGP and MGP over seven datasets from UCI repository (Dua and Graff 2017)."
Dataset Splits | No | The paper states "All results have been averaged over five random splits (90% train, 10% test)" but does not mention a separate validation split for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper mentions "setting up the Linux machine used for the experiments" but gives no specific details on GPU models, CPU models, or other hardware.
Software Dependencies | No | The paper mentions "Adam (Kingma and Ba 2014)" as the optimizer but does not specify version numbers for any software dependencies such as the programming language or libraries (e.g., PyTorch, TensorFlow).
Experiment Setup | Yes | "We train all methods for 300 epochs using Adam (Kingma and Ba 2014). All GPs use Gaussian kernels with separate length-scale per dimension. For MGP, the projection dimension d is determined by cross-validation on training data, with its value ranging in 3 ≤ d ≤ 7 across all seven datasets."
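The paper releases no code, but the evaluation protocol quoted above (five random 90% train / 10% test splits, results averaged) can be sketched directly. A minimal NumPy sketch follows; the dataset size, random seed, and the stand-in "metric" are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def random_splits(n, n_splits=5, train_frac=0.9, seed=0):
    """Yield the paper's evaluation protocol: five random
    90% train / 10% test index splits (no separate validation set).
    The seed is an assumption; the paper does not report one."""
    rng = np.random.default_rng(seed)
    for _ in range(n_splits):
        perm = rng.permutation(n)
        cut = int(train_frac * n)
        yield perm[:cut], perm[cut:]

# Hypothetical usage: average a per-split metric over the five splits.
n = 1000  # assumed dataset size for illustration
scores = []
for train_idx, test_idx in random_splits(n):
    # model fitting on train_idx and scoring on test_idx would go here;
    # we record the test-fold size as a stand-in metric.
    scores.append(len(test_idx))
print(sum(scores) / len(scores))  # → 100.0 (each test fold is 10% of n)
```

Cross-validating the MGP projection dimension d over {3, …, 7} would then be an inner loop over candidate values on the training indices only, consistent with the paper's statement that d is chosen "by cross-validation on training data".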