Additive Gaussian Processes Revisited
Authors: Xiaoyu Lu, Alexis Boukouvalas, James Hensman
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted extensive sets of regression and classification experiments to show its practical value. The resulting model is parsimonious and interpretable, requiring minimal model tuning. Experimental results are given in Section 5 |
| Researcher Affiliation | Industry | 1Amazon, Cambridge, United Kingdom. Correspondence to: Xiaoyu Lu <luxiaoyu@amazon.com>. |
| Pseudocode | Yes | Algorithm 1 Newton-Girard method for computing the interacting kernel |
| Open Source Code | Yes | Our code is available at https://github.com/amzn/orthogonal-additive-gaussian-processes. |
| Open Datasets | Yes | We validate our model on a range of experiments, including a set of regression and classification problems from datasets used in Duvenaud et al. (2011) and additional UCI datasets, a large-scale SUSY experiment and a Churn modelling problem. ...Data: Auto MPG, Housing, Concrete, Pumadyn, Breast, Pima, Sonar, Ionosphere, Liver, Heart (Table 3) |
| Dataset Splits | Yes | We use five-fold cross-validation splits and compute test RMSE for regression and area-under-the-curve (AUC) errors for classification datasets. ...We use the same training-test split as in Dutordoir et al. (2020). |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware specifications (e.g., GPU models, CPU types, or memory amounts) used for running the experiments. |
| Software Dependencies | No | We plug the OAK kernel in the gpflow package... using gpflow.GPR (or gpflow.SGPR for larger datasets); for classification tasks, we use gpflow.SVGP... (all using Scikit-learn defaults). The paper mentions software packages used but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | We place a Gamma prior on the variance hyperparameters of the kernel, which are estimated using Maximum a Posteriori (MAP) estimation. After learning the hyperparameters, we compute the Sobol index for each term... The number of inducing points and the mini-batch size are set to 800 and 1024, respectively. |
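
The Newton-Girard recursion named in the paper's Algorithm 1 builds the order-n interaction terms of an additive kernel from elementwise power sums of the per-dimension base kernels, rather than summing over all variable subsets. A minimal NumPy sketch of that recursion (the function name and array layout here are illustrative, not taken from the paper's code):

```python
import numpy as np

def newton_girard_additive(base_kernels, max_order):
    """Compute the order-1..max_order additive interaction terms e_n from
    per-dimension base kernel matrices via the Newton-Girard recursion

        e_n = (1/n) * sum_{k=1}^{n} (-1)**(k-1) * e_{n-k} * s_k,

    where s_k is the elementwise k-th power sum of the base kernels.

    base_kernels: array of shape (D, N, N), one N x N Gram matrix per
    input dimension. Returns a list [e_1, ..., e_max_order] of N x N arrays.
    """
    D, N, _ = base_kernels.shape
    # Elementwise power sums s_k = sum_i K_i ** k; index 0 is a placeholder.
    s = [None] + [np.sum(base_kernels ** k, axis=0) for k in range(1, max_order + 1)]
    e = [np.ones((N, N))]  # e_0 = 1 (the constant term)
    for n in range(1, max_order + 1):
        acc = sum((-1) ** (k - 1) * e[n - k] * s[k] for k in range(1, n + 1))
        e.append(acc / n)
    return e[1:]
```

For D input dimensions this costs O(D * max_order) elementwise matrix operations, whereas expanding each order-n term directly would require a sum over all size-n subsets of dimensions.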
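
The evaluation protocol quoted above (five-fold cross-validation with test RMSE for regression) can be sketched as follows. This is a generic stand-in, not the paper's pipeline: the authors plug the OAK kernel into gpflow, while this sketch uses scikit-learn's `GaussianProcessRegressor` with an assumed RBF-plus-noise kernel purely to illustrate the split-and-score loop:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def cv_rmse(X, y, n_splits=5, seed=0):
    """Five-fold cross-validated test RMSE (mean and std across folds)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    rmses = []
    for train_idx, test_idx in kf.split(X):
        # Illustrative model choice; the paper's experiments use gpflow.GPR
        # (or gpflow.SGPR / gpflow.SVGP) with the OAK kernel instead.
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                      normalize_y=True)
        gp.fit(X[train_idx], y[train_idx])
        pred = gp.predict(X[test_idx])
        rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
    return np.mean(rmses), np.std(rmses)
```

For the classification datasets the same loop applies with AUC (e.g. `sklearn.metrics.roc_auc_score`) replacing RMSE as the fold score.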
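
After MAP estimation of the hyperparameters, the setup above ranks additive terms by their Sobol index, i.e. each term's share of the total variance of the decomposition. A minimal sketch of that normalization and ranking step (the helper names and the variance inputs are illustrative; for the OAK kernel the per-term variances are computed from the fitted model, not supplied by hand):

```python
import numpy as np

def sobol_indices(term_variances):
    """Normalize per-term variances Var(f_u) into Sobol indices summing to 1."""
    v = np.asarray(term_variances, dtype=float)
    return v / v.sum()

def rank_terms(term_names, term_variances):
    """Return (name, sobol_index) pairs sorted by decreasing importance."""
    idx = sobol_indices(term_variances)
    order = np.argsort(idx)[::-1]
    return [(term_names[i], float(idx[i])) for i in order]
```

Ranking terms this way is what makes the additive model interpretable: low-index interaction terms can be pruned with little loss of explained variance.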