Learning Gaussian Processes by Minimizing PAC-Bayesian Generalization Bounds
Authors: David Reeb, Andreas Doerr, Sebastian Gerwinn, Barbara Rakitsch
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We find in our evaluation that our learning method is robust and yields significantly better generalization guarantees than other common GP approaches on several regression benchmark datasets. We evaluate our learning method on several datasets and compare its performance to state-of-the-art GP methods [1, 15, 6] in Sect. 4. |
| Researcher Affiliation | Industry | David Reeb, Andreas Doerr, Sebastian Gerwinn, Barbara Rakitsch; Bosch Center for Artificial Intelligence, Robert-Bosch-Campus 1, 71272 Renningen, Germany; {david.reeb,andreas.doerr3,sebastian.gerwinn,barbara.rakitsch}@de.bosch.com |
| Pseudocode | No | The paper describes methods in prose and mathematical formulations but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Python code (building on GPflow [27] and TensorFlow [28]) implementing our method is available at https://github.com/boschresearch/PAC_GP. |
| Open Datasets | Yes | boston housing dataset (footnote 7: http://lib.stat.cmu.edu/datasets/boston); pol, sarcos, and kin40k (footnote 8: https://github.com/trungngv/fgp.git; http://www.gaussianprocess.org/gpml/data) |
| Dataset Splits | No | The paper specifies an '80% of the dataset for training and 20% for testing' split, but does not explicitly mention a validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running experiments. |
| Software Dependencies | No | The paper mentions 'GPflow [27] and TensorFlow [28]' as software used, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | As pre-processing we normalized all features and the output to mean zero and unit variance, then analysed the impact of the accuracy goal ε ∈ {0.2, 0.4, 0.6, 0.8, 1.0}. We round each component of ln θ to two decimal digits in the range [−6, +6], i.e. L = 6, G = 1200. |
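The quoted setup combines two concrete steps: z-score normalization of inputs and output, and discretization of the log-hyperparameters to a grid of two-decimal values clipped to [−6, +6] (hence G = 2·6/0.01 = 1200 grid values per dimension). A minimal sketch of those two steps, with function names of our own choosing (not from the authors' PAC_GP code), might look like:

```python
import numpy as np

def preprocess(X, y):
    """Normalize features and the output to mean zero and unit variance."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    y = (y - y.mean()) / y.std()
    return X, y

def discretize_log_params(log_theta, L=6.0, decimals=2):
    """Round each component of ln(theta) to two decimal digits,
    clipped to the range [-L, +L].

    With L = 6 and a step of 0.01 this gives G = 2 * L / 0.01 = 1200
    grid values per dimension, matching the L = 6, G = 1200 quoted above.
    """
    return np.clip(np.round(log_theta, decimals), -L, L)

# Illustrative usage on synthetic data (not one of the paper's benchmarks):
rng = np.random.default_rng(0)
X, y = preprocess(rng.normal(size=(100, 3)), rng.normal(size=100))
log_theta = discretize_log_params(np.array([0.1234, -7.0, 3.14159]))
```

The clipping in `discretize_log_params` is our reading of "in the range [−6, +6]"; the actual handling of out-of-range values in the released PAC_GP code may differ.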