Spectral Representation of Robustness Measures for Optimization Under Input Uncertainty

Authors: Jixiang Qing, Tom Dhaene, Ivo Couckuyt

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct multiple experiments to investigate the accuracy of the spectral representation of the robustness measures. We also benchmark their performance for RBO, using the three proposed acquisition functions. The code is implemented using the open-source library Trieste (Berkeley et al., 2021). We conduct RBO on common synthetic benchmark functions and report the normal input uncertainty results in Fig. 3, as well as the uniform input uncertainty results in appendix H.5.
Researcher Affiliation | Academia | Ghent University, imec, IDLab, Department of Information Technology (INTEC), Tech Lane Zwijnaarde 126, 9052 Ghent, Belgium. Correspondence to: Jixiang Qing <Jixiang.Qing@UGent.be>.
Pseudocode | Yes | Algorithm 1: Sampling Robustness Measures Trajectories
Open Source Code | Yes | Our code is available at https://github.com/TsingQAQ/gp_mean_var_rbo.
Open Datasets | Yes | We conduct RBO on common synthetic benchmark functions and real-life problems. For each problem, 5d initial randomly generated data are used.
Dataset Splits | No | A data set of 10d samples is drawn from a GP prior based on a SE kernel. Afterwards, a GP is constructed and we compare the robustness measure distributions (based on RFF and QFF) at different input locations with an exhaustive Monte Carlo approach, which is regarded as the ground truth.
Hardware Specification | No | The paper does not provide specific details on the hardware used for running the experiments.
Software Dependencies | No | The code is implemented using the open-source library Trieste (Berkeley et al., 2021).
Experiment Setup | Yes | Number of Fourier features: since Fourier feature-based acquisition functions need to specify the number of features explicitly, we use {128, 900, 1000} for {1d, 2d, 3d} problems, respectively, for both RFF and QFF. For each problem, 5d initial randomly generated data are used. For constructing the GP, the SE ARD kernel is used with a log-normal prior on the lengthscales, where the kernel hyperparameters are inferred using maximum a posteriori estimation. Each experiment is repeated 30 times.
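The random Fourier features (RFF) mentioned in the Dataset Splits and Experiment Setup rows rest on Bochner's theorem: the SE kernel equals an expectation of cosines under its Gaussian spectral density, so sampling frequencies from that density yields a finite feature map whose inner product approximates the kernel. A minimal NumPy sketch of this idea (the dimension, feature count, and lengthscale below are illustrative values, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, ell = 2, 2048, 0.7  # input dim, number of features, lengthscale (assumed values)

# Spectral density of the SE kernel k(r) = exp(-||r||^2 / (2 ell^2)) is N(0, I / ell^2).
omega = rng.normal(0.0, 1.0 / ell, size=(M, d))   # sampled frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=M)         # random phases

def phi(X):
    """Random Fourier feature map: k(x, x') ~= phi(x) @ phi(x')."""
    return np.sqrt(2.0 / M) * np.cos(X @ omega.T + b)

X = rng.normal(size=(5, d))
K_rff = phi(X) @ phi(X).T
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dist / (2.0 * ell ** 2))
print(np.max(np.abs(K_rff - K_exact)))  # Monte Carlo error, roughly O(1/sqrt(M))
```

Because the frequencies are random, the error decays only as O(1/sqrt(M)), which is why the assessed paper reports needing hundreds to thousands of features per problem dimension.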
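Quadrature Fourier features (QFF), the second approximation benchmarked above, replace RFF's Monte Carlo frequency samples with deterministic Gauss-Hermite quadrature nodes, which converge much faster for the SE kernel's smooth Gaussian spectral density. A one-dimensional sketch, again with illustrative parameter values rather than the paper's implementation:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

ell, M = 0.7, 16  # lengthscale and number of quadrature nodes (assumed values)

# k(r) = exp(-r^2 / (2 ell^2)) = E_{w ~ N(0, 1/ell^2)}[cos(w r)].
# Changing variables w = sqrt(2) s / ell turns this into a Gauss-Hermite
# integral with weight exp(-s^2), approximated by the nodes/weights below.
s, w = hermgauss(M)
omega = np.sqrt(2.0) * s / ell  # quadrature frequencies
wq = w / np.sqrt(np.pi)         # normalized quadrature weights

def phi(x):
    """QFF map with 2M deterministic features: k(x, x') ~= phi(x) @ phi(x')."""
    x = np.atleast_1d(x)
    cos_part = np.sqrt(wq) * np.cos(np.outer(x, omega))
    sin_part = np.sqrt(wq) * np.sin(np.outer(x, omega))
    return np.concatenate([cos_part, sin_part], axis=1)

x = np.linspace(-1.0, 1.0, 7)
K_qff = phi(x) @ phi(x).T
K_exact = np.exp(-np.subtract.outer(x, x) ** 2 / (2.0 * ell ** 2))
print(np.max(np.abs(K_qff - K_exact)))  # small quadrature error
```

With only 16 nodes the quadrature error on this interval is already negligible, illustrating why deterministic features can match or beat a much larger random feature set in low dimensions; in d dimensions the node grid grows exponentially, consistent with the paper using feature counts that rise steeply with problem dimension.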