Incremental Variational Sparse Gaussian Process Regression

Authors: Ching-An Cheng, Byron Boots

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct several experiments and show that our proposed approach achieves better empirical performance in terms of prediction error than the recent state-of-the-art incremental solutions to variational sparse GPR.
Researcher Affiliation | Academia | Ching-An Cheng, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, cacheng@gatech.edu; Byron Boots, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, bboots@cc.gatech.edu
Pseudocode | No | The paper describes the algorithm steps mathematically and textually but does not include a formal pseudocode block or algorithm listing.
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | We performed experiments on three real-world robotic datasets, kin40k, SARCOS, and KUKA... kin40k: 10000 training data, 30000 testing data, 8 attributes [23]. SARCOS: 44484 training data, 4449 testing data, 28 attributes. http://www.gaussianprocess.org/gpml/data/. KUKA1 & KUKA2: 17560 offline data, 180360 online data, 28 attributes. [15]
Dataset Splits | No | The paper mentions '10000 training data, 30000 testing data' for kin40k, '44484 training data, 4449 testing data' for SARCOS, and 'split 90% into training and 10% into testing datasets' for KUKA, but it does not explicitly mention a separate validation set or its split.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation.
Experiment Setup | Yes | All models were initialized with the same hyperparameters and inducing points: the hyperparameters were selected as the optimal ones in the batch variational sparse GPR [26] trained on a subset of the training dataset of size 2048; the inducing points were initialized as random samples from the first minibatch. We chose the learning rate to be γ_t = (1 + t)^{-1} for stochastic mirror ascent to update the posterior approximation; the learning rate for the stochastic gradient ascent to update the hyperparameters is set to 10^{-4} γ_t. We evaluate the models in terms of the normalized mean squared error (nMSE) on a held-out test set after 500 iterations. We set the number of inducing functions to 512. N_m = 2048.
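The quoted setup reduces to a few constants and formulas. The short Python sketch below is not the authors' code; the variable names are chosen here for illustration, and treating N_m = 2048 as the minibatch size is an assumption. It simply restates the learning-rate schedules and the nMSE metric as quoted.

import numpy as np

NUM_INDUCING = 512      # number of inducing functions reported in the quote above
MINIBATCH_SIZE = 2048   # N_m = 2048 as quoted (assumed here to be the minibatch size)
NUM_ITERATIONS = 500    # models are evaluated after 500 iterations

def posterior_learning_rate(t: int) -> float:
    """gamma_t = (1 + t)^{-1}, used for the stochastic mirror ascent updates."""
    return 1.0 / (1.0 + t)

def hyperparameter_learning_rate(t: int) -> float:
    """10^{-4} * gamma_t, used for stochastic gradient ascent on the hyperparameters."""
    return 1e-4 * posterior_learning_rate(t)

def normalized_mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """nMSE: mean squared error normalized by the variance of the test targets."""
    return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))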