Efficient Robust Bayesian Optimization for Arbitrary Uncertain Inputs
Authors: Lin Yang, Junlong Lyu, Wenlong Lyu, Zhitang Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive evaluations on synthetic functions and real problems in Sec. 5 demonstrate that our algorithm can efficiently identify robust optimum under complex input uncertainty and achieve a state-of-the-art performance. From Sec. 5 (Evaluation): In this section, we first experimentally demonstrate AIRBO's ability to model uncertain inputs of arbitrary distributions, then validate the Nyström-based inference acceleration for GP posterior, followed by experiments on robust optimization of synthetic functions and real-world benchmark. |
| Researcher Affiliation | Industry | Lin Yang, Huawei Noah's Ark Lab, China, yanglin33@huawei.com; Junlong Lyu, Huawei Noah's Ark Lab, Hong Kong SAR, China, lyujunlong@huawei.com; Wenlong Lyu, Huawei Noah's Ark Lab, China, lvwenlong2@huawei.com; Zhitang Chen, Huawei Noah's Ark Lab, Hong Kong SAR, China, chenzhitang2@huawei.com |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found. The methods are described in narrative text. |
| Open Source Code | Yes | The code will be available on https://github.com/huawei-noah/HEBO, and more implementation details can be found in Appendix C.1. |
| Open Datasets | Yes | Comprehensive evaluations on synthetic functions and real problems in Sec. 5 demonstrate that our algorithm can efficiently identify robust optimum under complex input uncertainty and achieve a state-of-the-art performance. Also: To evaluate AIRBO in a real-world problem, we employ a robust robot pushing benchmark from [31]. |
| Dataset Splits | No | The paper refers to 'training datasets' (e.g., 'produce training datasets of $D = \{(x_i, f(x_i + \delta_i)) \mid \delta_i \sim P_{x_i}\}_{i=1}^{10}$') but does not provide specific training/validation/test split percentages or sample counts for any dataset used. |
| Hardware Specification | No | The paper mentions 'GPU memory' and 'parallel computation' but does not provide specific hardware details such as GPU/CPU models, memory specifications, or types of computing instances used for experiments. |
| Software Dependencies | No | In our implementation of AIRBO, we design the kernel k used for MMD estimation to be a linear combination of multiple Rational Quadratic kernels, as its long-tail behavior circumvents the fast decay issue of the kernel [6]. We implement our algorithm based on BoTorch [2] and employ a linear combination of multiple rational quadratic kernels [6] to compute the MMD as in Eq. 9. No version numbers are given for BoTorch or other software dependencies. (An illustrative sketch of such an RQ-kernel MMD estimate follows the table.) |
| Experiment Setup | Yes | In our implementation of AIRBO... we employ a classic UCB-based acquisition as in Eq. 5 with β = 2.0 and maximize it via an L-BFGS-B optimizer. (A minimal sketch of this acquisition setup follows the table.) |
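
The kernel design quoted under Software Dependencies (an MMD computed with a linear combination of Rational Quadratic kernels) can be illustrated with a short sketch. This is not the authors' released code; the mixture exponents `alphas` and the shared lengthscale are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch: MMD^2 between two sample sets using a mixture of
# Rational Quadratic kernels, k(x, y) = (1 + d^2 / (2 * a * l^2))^(-a).
# The alphas and lengthscale below are hypothetical choices.
import torch

def rq_kernel_mixture(x, y, alphas=(0.2, 0.5, 1.0, 2.0, 5.0), lengthscale=1.0):
    """Sum of Rational Quadratic kernels evaluated on all pairs of rows of x and y."""
    d2 = torch.cdist(x, y).pow(2)  # pairwise squared Euclidean distances
    return sum((1.0 + d2 / (2.0 * a * lengthscale ** 2)).pow(-a) for a in alphas)

def mmd2(x_samples, y_samples):
    """Biased (V-statistic) estimate of MMD^2 between two empirical distributions."""
    k_xx = rq_kernel_mixture(x_samples, x_samples).mean()
    k_yy = rq_kernel_mixture(y_samples, y_samples).mean()
    k_xy = rq_kernel_mixture(x_samples, y_samples).mean()
    return k_xx + k_yy - 2.0 * k_xy

# Example: samples from two shifted distributions give a positive MMD^2 estimate.
x = torch.randn(128, 2)
y = torch.randn(128, 2) + 0.5
print(mmd2(x, y).item())
```

The long-tailed Rational Quadratic mixture is what the paper credits with avoiding the fast-decay behavior of a single kernel when comparing input distributions.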
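
The setup quoted under Experiment Setup (a UCB acquisition with β = 2.0 maximized with L-BFGS-B) matches a standard BoTorch workflow. The sketch below is not the paper's AIRBO implementation: it assumes a recent BoTorch version, a plain SingleTaskGP surrogate, and placeholder training data and bounds. BoTorch's `optimize_acqf` maximizes the acquisition with L-BFGS-B (via SciPy) by default.

```python
# Minimal sketch of a UCB(beta=2.0) acquisition step in BoTorch.
# Training data and bounds are placeholders, not values from the paper.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_x = torch.rand(10, 2, dtype=torch.double)        # placeholder inputs in [0, 1]^2
train_y = train_x.sin().sum(dim=-1, keepdim=True)      # placeholder objective values

# Fit a standard GP surrogate by maximizing the exact marginal log-likelihood.
model = SingleTaskGP(train_x, train_y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# UCB acquisition with beta = 2.0, as quoted in the row above.
ucb = UpperConfidenceBound(model, beta=2.0)
bounds = torch.stack([torch.zeros(2), torch.ones(2)]).double()
candidate, acq_value = optimize_acqf(
    ucb, bounds=bounds, q=1, num_restarts=10, raw_samples=64
)
print(candidate)  # next query point suggested by maximizing the UCB
```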