Posterior and Computational Uncertainty in Gaussian Processes
Authors: Jonathan Wenger, Geoff Pleiss, Marvin Pförtner, Philipp Hennig, John P. Cunningham
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we empirically demonstrate the consequences of ignoring computational uncertainty and show how implicitly modeling it improves generalization performance on benchmark datasets. |
| Researcher Affiliation | Academia | ¹University of Tübingen ²Columbia University ³Max Planck Institute for Intelligent Systems, Tübingen |
| Pseudocode | Yes | Algorithm 1: A Class of Computation-Aware Iterative Methods for GP Approximation (a hedged sketch in this style appears directly after the table) |
| Open Source Code | Yes | An implementation of Algorithm 1, based on KeOps [48] and ProbNum [60], is available at: https://github.com/JonathanWenger/itergp |
| Open Datasets | Yes | as well as a range of UCI datasets [61] with training set sizes n = 5,287 to 57,247, dimensions d = 9 to 26 and standardized features. |
| Dataset Splits | Yes | All experiments were run 10 times with randomly sampled training and test splits of 90/10 and we report average metrics with 95% confidence intervals. (A sketch of this protocol follows the algorithm sketch below.) |
| Hardware Specification | Yes | All experiments were run on an NVIDIA GeForce RTX 2080 Ti graphics card. |
| Software Dependencies | No | An implementation of Algorithm 1, based on KeOps [48] and ProbNum [60], is available at: https://github.com/JonathanWenger/itergp. While specific software names are mentioned, their version numbers are not provided. |
| Experiment Setup | No | The paper states that hyperparameters are selected using a specific training procedure and that a zero-mean prior and Matérn(1/2) kernel are used, but it does not provide specific numerical values for hyperparameters such as learning rate, batch size, or optimizer settings in the main text. (A kernel sketch closes the section below.) |
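
The pseudocode row above names Algorithm 1, a class of computation-aware iterative methods for GP approximation. Below is a minimal NumPy sketch in that style, reconstructed from the paper's description rather than taken from the reference implementation; the CG-style residual policy, the names `itergp_sketch`, `C`, `v`, and the stopping tolerance are illustrative assumptions. The authoritative code is at https://github.com/JonathanWenger/itergp.

```python
import numpy as np

def itergp_sketch(K_hat, y, max_iters=100, tol=1e-6):
    """Sketch of a computation-aware iterative GP solver (not the reference code).

    Builds a low-rank approximation C of K_hat^{-1} from actions s_i, where
    K_hat = K + sigma^2 * I. The unexplored part K_hat^{-1} - C is what the
    paper interprets as computational uncertainty.
    """
    n = y.shape[0]
    C = np.zeros((n, n))  # approximate inverse after 0 iterations
    v = np.zeros(n)       # representer weights, v = C @ y
    for _ in range(max_iters):
        r = y - K_hat @ v                # residual of the linear system
        if np.linalg.norm(r) < tol:
            break
        s = r                            # assumed policy: CG-style residual action
        d = s - C @ (K_hat @ s)          # K_hat-orthogonalize against past actions
        eta = s @ (K_hat @ d)            # step normalization
        C += np.outer(d, d) / eta        # rank-one update of the approximate inverse
        v = C @ y                        # refreshed representer weights
    return v, C

def predict(k_star_X, k_star_star, v, C):
    """Posterior mean and combined (mathematical + computational) covariance."""
    mean = k_star_X @ v
    cov = k_star_star - k_star_X @ C @ k_star_X.T
    return mean, cov
```

As actions accumulate, `C` approaches `K_hat^{-1}` on the explored subspace, so the combined covariance contracts toward the exact mathematical posterior.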
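The dataset-splits row quotes the evaluation protocol: 10 repetitions over random 90/10 train/test splits, with averages and 95% confidence intervals. A small sketch of that protocol follows; the normal-approximation interval (z = 1.96) is my assumption, as the quoted text does not state how the intervals are computed.

```python
import numpy as np

def random_split(X, y, train_frac=0.9, rng=None):
    """One random train/test split at the paper's 90/10 ratio."""
    rng = np.random.default_rng() if rng is None else rng
    perm = rng.permutation(X.shape[0])
    n_train = int(train_frac * X.shape[0])
    return (X[perm[:n_train]], y[perm[:n_train]],
            X[perm[n_train:]], y[perm[n_train:]])

def mean_with_ci(values, z=1.96):
    """Mean and 95% CI half-width over repeated runs (normal approximation)."""
    values = np.asarray(values, dtype=float)
    return values.mean(), z * values.std(ddof=1) / np.sqrt(len(values))
```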
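Finally, the experiment-setup row pins down the GP model as a zero-mean prior with a Matérn(1/2) kernel, which coincides with the exponential kernel. A sketch, with placeholder hyperparameter values (the paper selects these via its training procedure):

```python
import numpy as np

def matern12_kernel(X1, X2, lengthscale=1.0, outputscale=1.0):
    """Matérn(1/2), i.e. exponential, kernel: k(x, x') = s^2 * exp(-||x - x'|| / l)."""
    dists = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    return outputscale * np.exp(-dists / lengthscale)

# With a zero-mean prior, K_hat = matern12_kernel(X, X) + sigma^2 * np.eye(len(X))
# is the only model-dependent input the iterative solver sketched above needs.
```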