Distributed Gaussian Processes

Authors: Marc Deisenroth, Jun Wei Ng

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically assess three aspects of all distributed GP models: (1) the required training time, (2) the approximation quality, (3) a comparison with state-of-the-art sparse GP methods. (A hedged sketch of the prediction-combination rule these models share follows the table.)
Researcher Affiliation | Academia | Marc Peter Deisenroth (M.DEISENROTH@IMPERIAL.AC.UK), Department of Computing, Imperial College London, United Kingdom; Jun Wei Ng (JUNWEI.NG10@ALUMPERIAL.AC.UK), Department of Computing, Imperial College London, United Kingdom
Pseudocode | No | The paper describes methods and computational structures but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets | Yes | The Kin40K data set consists of 10,000 training points and 30,000 test points. We use the same split into training and test data as Seeger et al. (2003), Lázaro-Gredilla et al. (2010), and Nguyen & Bonilla (2014). For the US Flight Data, a link to the dataset is provided: http://stat-computing.org/dataexpo/2009/
Dataset Splits | Yes | The Kin40K data set consists of 10,000 training points and 30,000 test points. We use the same split into training and test data as Seeger et al. (2003), Lázaro-Gredilla et al. (2010), and Nguyen & Bonilla (2014). For the US Flight Data: 'We selected the first P data points to train the model and the following 100,000 to test it.' (A minimal split sketch follows the table.)
Hardware Specification | Yes | For the Kin40K experiment: 'a Virtual Machine with 16 3 GHz cores and 8 GB RAM.' For the US Flight Data: 'a workstation with 12 3.5 GHz cores and 32 GB RAM...' and 'All experiments can be conducted on a Mac Book Air (2012) with 8 GB RAM.'
Software Dependencies | No | The paper does not specify version numbers for any software components or libraries used.
Experiment Setup | No | The paper mentions using L-BFGS for training and optimizing hyper-parameters but does not provide specific hyper-parameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations. (A hedged L-BFGS sketch follows the table.)
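
The quoted 'Research Type' passage refers to the distributed GP models the paper compares (product-of-experts variants and Bayesian committee machines). As a rough illustration of what such models do at prediction time, below is a minimal NumPy sketch of a robust-BCM-style combination of independent GP experts, written from the paper's description rather than from any released code; the function name, array layout, and the use of the prior variance as the fallback term are our own assumptions.

```python
import numpy as np

def rbcm_combine(means, variances, prior_var):
    """Combine per-expert GP predictions with a robust-BCM-style rule.

    means, variances : arrays of shape (num_experts, num_test) holding each
        expert's predictive mean and variance at the test inputs.
    prior_var : prior predictive variance (kernel variance plus noise) at the
        test inputs.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)

    # Weight each expert by how much it reduces entropy relative to the prior.
    beta = 0.5 * (np.log(prior_var) - np.log(variances))

    # Combined precision: weighted expert precisions plus a prior-correction
    # term, so uninformative experts fall back towards the prior.
    precision = np.sum(beta / variances, axis=0) \
        + (1.0 - np.sum(beta, axis=0)) / prior_var
    var = 1.0 / precision
    mean = var * np.sum(beta * means / variances, axis=0)
    return mean, var
```

Stacking the predictive means and variances of GPs trained on disjoint subsets of the training data and passing them to this function yields a single predictive mean and variance per test point.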
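The 'Dataset Splits' row quotes the flight-data protocol: the first P points train the model and the following 100,000 test it. Purely for illustration, assuming the records are loaded into arrays in file order, such a split might look like the following; the array names, sizes, and the value of P are placeholders, not values taken from the paper.

```python
import numpy as np

# Placeholder stand-ins for the flight records, assumed to be in file order.
rng = np.random.default_rng(0)
X_all = rng.standard_normal((900_000, 8))
y_all = rng.standard_normal(900_000)

P = 700_000  # placeholder; the quoted protocol leaves P as a variable

X_train, y_train = X_all[:P], y_all[:P]
X_test, y_test = X_all[P:P + 100_000], y_all[P:P + 100_000]
```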
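The 'Experiment Setup' row notes that hyper-parameters are learned with L-BFGS but gives no further details. As a hedged, self-contained sketch of what that typically involves for a GP (not the authors' setup), the following minimises the negative log marginal likelihood of a squared-exponential kernel with SciPy's L-BFGS-B; the kernel choice, toy data, and initial values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, X, y):
    """Negative log marginal likelihood of a GP with a squared-exponential kernel.

    log_params = [log signal variance, log lengthscale, log noise variance].
    """
    sf2, ell, sn2 = np.exp(log_params)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = sf2 * np.exp(-0.5 * sq_dists / ell**2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # 0.5 * y' K^{-1} y + 0.5 * log|K| + (n/2) * log(2*pi)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) \
        + 0.5 * len(y) * np.log(2 * np.pi)

# Illustrative toy data; the paper's training script is not available.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

result = minimize(neg_log_marginal_likelihood, x0=np.zeros(3),
                  args=(X, y), method="L-BFGS-B")
print("optimised log hyper-parameters:", result.x)
```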