Sharp Calibrated Gaussian Processes

Authors: Alexandre Capone, Sandra Hirche, Geoff Pleiss

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically validate the proposed sharp calibrated GPs on both synthetic and real-world datasets, demonstrating their improved calibration and performance over existing methods. ... The results show that SharpCalGP consistently achieves better calibration while maintaining competitive accuracy compared to state-of-the-art GP methods on various regression tasks.
Researcher Affiliation | Collaboration | Alexandre Capone (Technical University of Munich), Sandra Hirche (Technical University of Munich), Geoff Pleiss (University of British Columbia, Vector Institute)
Pseudocode | Yes | Algorithm 1: SharpCalGP Training Algorithm
Open Source Code | No | Our code will be made publicly available upon publication.
Open Datasets | Yes | We evaluate SharpCalGP on 10 UCI regression datasets, a standard benchmark for GP models...
Dataset Splits | Yes | For the UCI datasets, we use the standard 80/10/10 train/validation/test split across 10 random seeds. (A hedged split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | For all UCI datasets, we set the sharpness parameter λ to 1 and train for 500 iterations using the Adam optimizer with a learning rate of 0.01. (A hedged training-loop sketch follows the table.)
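
The 80/10/10 protocol from the Dataset Splits row is mechanical enough to sketch. The snippet below is a minimal reconstruction assuming plain NumPy index shuffling; the helper name `make_split` and the choice of `numpy.random.default_rng` are illustrative assumptions, not the paper's actual splitting code.

```python
# Minimal sketch of an 80/10/10 train/validation/test split,
# repeated over 10 random seeds. Assumes NumPy; the paper does
# not specify its splitting tooling.
import numpy as np

def make_split(n, seed):
    """Return shuffled train/val/test index arrays in 80/10/10 proportions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# One independent split per seed, mirroring the 10-seed setup.
splits = [make_split(n=1000, seed=s) for s in range(10)]
```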
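
The Experiment Setup row fixes the optimizer settings (Adam, learning rate 0.01, 500 iterations) but not the software stack. The sketch below wires those settings into a standard exact-GP training loop; GPyTorch is an assumption, and the loss shown is the ordinary exact marginal log-likelihood rather than the paper's sharpness-penalized calibration objective, so the reported λ = 1 weight is not reflected here.

```python
# Hedged sketch: the reported Adam / lr=0.01 / 500-iteration settings
# applied to a standard exact GP in GPyTorch (an assumed stack).
import torch
import gpytorch

class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

# Toy 1-D regression data standing in for a UCI dataset.
train_x = torch.linspace(0, 1, 100).unsqueeze(-1)
train_y = torch.sin(6 * train_x).squeeze() + 0.1 * torch.randn(100)

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
model.train()
likelihood.train()

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)  # reported learning rate
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

for _ in range(500):  # reported number of training iterations
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)  # stand-in for the SharpCalGP objective
    loss.backward()
    optimizer.step()
```

Recovering the paper's actual setup would require substituting its calibration-aware objective, weighted by the sharpness parameter λ, for the marginal log-likelihood used above.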