Efficient Distributionally Robust Bayesian Optimization with Worst-case Sensitivity

Authors: Sebastian Shenghong Tay, Chuan Sheng Foo, Urano Daisuke, Richalynn Leong, Bryan Kian Hsiang Low

ICML 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We provide empirical results to show that our algorithm utilizing the fast approximation scales significantly better in the context set size \|C\| and yet performs comparably to that using the exact worst-case expected value, while outperforming non-robust ones (Sec. 7)." |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science, National University of Singapore, Singapore; ²Institute for Infocomm Research, A*STAR, Singapore; ³Temasek Life Sciences Laboratory, Singapore |
| Pseudocode | Yes | "Algorithm 1 Generalized DRBO (Kirschner et al., 2020)" (see the loop sketch below) |
| Open Source Code | Yes | "The code is available at https://github.com/sebtsh/fast-drbo." |
| Open Datasets | Yes | "We use wind power data from the Open Power System Data project (Wiese et al., 2019)." |
| Dataset Splits | No | The paper describes the data generation and distribution parameters for the synthetic data and for the objective functions built from real-world data, but it does not specify explicit train/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions software packages such as NumPy, GPflow, and CVXPY, and notes that the ECOS and SCS solvers were used, but it does not give exact version numbers for these dependencies (see the CVXPY sketch below). |
| Experiment Setup | Yes | "In all experiments (except Computation Time), βt = 2 ∀t ∈ [T], σ² = 0.001. The modelling GP uses an ARD squared exponential kernel with lengthscale 0.1 in each dimension and the ground-truth observational variance. The objective function values were normalized by subtracting the data mean and dividing by the data standard deviation." (see the GPflow sketch below) |
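
The Pseudocode row cites Algorithm 1, Generalized DRBO (Kirschner et al., 2020). Below is a minimal sketch of that loop's structure: at each round, pick the action whose worst-case expected upper confidence bound over a finite context set is largest. Everything here is an illustrative assumption rather than the paper's implementation: the uncertainty set is a total-variation ball, the inner worst case is solved by a simple greedy mass shift, and `ucb`, `drbo_step`, and `worst_case_expectation` are hypothetical names standing in for the GP surrogate and the paper's exact or sensitivity-approximated inner solver.

```python
import numpy as np

def worst_case_expectation(values, p_ref, eps):
    """Worst-case expectation over a total-variation ball of radius eps
    around p_ref: move up to eps of probability mass from the
    highest-valued contexts onto the single lowest-valued one.
    (Illustrative uncertainty set; the paper treats other distance sets.)"""
    order = np.argsort(values)        # context indices, worst value first
    q = p_ref.astype(float)           # astype copies, so p_ref is untouched
    budget = eps
    for j in order[::-1]:             # strip mass from the best contexts
        take = min(q[j], budget)
        q[j] -= take
        budget -= take
        if budget <= 0.0:
            break
    q[order[0]] += eps - budget       # pile the moved mass on the worst one
    return float(q @ values)

def drbo_step(ucb, actions, contexts, p_ref, eps):
    """One round of a generalized DRBO loop: maximize the worst-case
    expected UCB over the context set."""
    scores = [
        worst_case_expectation(
            np.array([ucb(x, c) for c in contexts]), p_ref, eps
        )
        for x in actions
    ]
    return actions[int(np.argmax(scores))]

# Toy usage with a hypothetical UCB surrogate (a GP would supply this).
actions = np.linspace(0.0, 1.0, 5)
contexts = np.linspace(0.0, 1.0, 4)
p_ref = np.full(4, 0.25)              # reference context distribution
ucb = lambda x, c: -(x - 0.3) ** 2 + 0.1 * c
x_next = drbo_step(ucb, actions, contexts, p_ref, eps=0.2)
```

The greedy inner step costs O(|C| log |C|) per action, which loosely mirrors the point quoted in the Research Type row: a cheap inner computation is what lets DRBO scale in the context set size |C|.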
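The Software Dependencies row mentions CVXPY together with the ECOS and SCS solvers. A natural role for a convex solver here is computing the exact inner worst-case expected value as a small convex program; the sketch below poses that problem over a total-variation ball, an assumed formulation for illustration (the paper's distance sets may differ), with `worst_case_expectation_cvx` as a hypothetical helper name.

```python
import cvxpy as cp
import numpy as np

def worst_case_expectation_cvx(values, p_ref, eps, solver="ECOS"):
    """Minimize E_q[values] over distributions q on a finite context set
    whose total-variation distance from p_ref is at most eps. Pass
    solver="SCS" to use SCS instead of ECOS."""
    n = len(values)
    q = cp.Variable(n, nonneg=True)
    constraints = [
        cp.sum(q) == 1,
        cp.norm1(q - p_ref) <= 2 * eps,   # TV distance = 0.5 * l1 norm
    ]
    problem = cp.Problem(cp.Minimize(q @ values), constraints)
    problem.solve(solver=solver)
    return problem.value, q.value

# Toy usage: the adversary shifts mass toward the lowest-valued context.
values = np.array([0.1, 0.5, 0.9, 0.4])
p_ref = np.full(4, 0.25)
wc_value, q_star = worst_case_expectation_cvx(values, p_ref, eps=0.2)
```

Calling a solver like this inside every acquisition step is the per-round cost that grows with |C|, which is what motivates the paper's fast approximation via worst-case sensitivity.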
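The Experiment Setup row pins down the surrogate: an ARD squared exponential kernel with lengthscale 0.1 in each dimension, observation noise σ² = 0.001, and mean/std normalization of the objective values. The GPflow sketch below wires those choices together; the toy data, input dimensionality, and variable names are assumptions for illustration, not the paper's actual experiment.

```python
import gpflow
import numpy as np

# Placeholder training data in [0, 1]^2; the paper's objectives come
# from synthetic functions and real-world datasets.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))
y = np.sin(3.0 * X[:, :1]) + X[:, 1:]

# Normalize objective values as quoted above: subtract the data mean,
# divide by the data standard deviation.
y = (y - y.mean()) / y.std()

# ARD squared exponential kernel with lengthscale 0.1 in each dimension.
kernel = gpflow.kernels.SquaredExponential(lengthscales=[0.1, 0.1])

# GP regression with observation noise fixed at sigma^2 = 0.001, kept
# constant rather than optimized (the paper uses the ground-truth
# observational variance).
model = gpflow.models.GPR(data=(X, y), kernel=kernel, noise_variance=1e-3)
gpflow.utilities.set_trainable(model.likelihood.variance, False)

mean, var = model.predict_f(X[:5])    # posterior mean and variance
```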