Are Random Decompositions all we need in High Dimensional Bayesian Optimisation?

Authors: Juliusz Krzysztof Ziomek, Haitham Bou Ammar

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We find that data-driven learners of decompositions can be easily misled towards local decompositions that do not hold globally across the search space. Then, we formally show that a random tree-based decomposition sampler exhibits favourable theoretical guarantees that effectively trade off maximal information gain and functional mismatch between the actual black-box and its surrogate as provided by the decomposition. Those results motivate the development of the random decomposition upper-confidence bound algorithm (RDUCB) that is straightforward to implement, (almost) plug-and-play, and, surprisingly, yields significant empirical gains compared to the previous state-of-the-art on a comprehensive set of benchmarks. We also confirm the plug-and-play nature of our modelling component by integrating our method with HEBO (Cowen-Rivers et al., 2022), showing improved practical gains in the highest dimensional tasks from the Bayesmark problem suite."
Researcher Affiliation | Industry | "Huawei Noah's Ark Lab, London, UK. Correspondence to: Haitham Bou-Ammar <haitham [dot] ammmar (at) huawei {dot} com>."
Pseudocode | Yes | Algorithm 1 (RDUCB); Algorithm 2 (Random Tree Sampler). A hedged code sketch of the random tree sampler appears after the table.
Open Source Code | Yes | "We have open-sourced our code to ease the reproducibility of our results." Repository: https://github.com/huawei-noah/HEBO/tree/master/RDUCB
Open Datasets | Yes | "Synthetic Functions: We test our method on 20-dimensional Rosenbrock, 20-dimensional Hartmann, and 250-dimensional Styblinski-Tang (Stybtang) functions. ... Neural Network Hyperparameter Tuning: ... NAS hyperparameter tuning benchmark (Zela et al., 2020). ... Mixed Integer Programming: ... tuning heuristic hyperparameters for the mixed integer programming (MIP) solver LPSolve (Berkelaar et al., 2015). ... Weighted Lasso Tuning: ... LassoBench (Šehić et al., 2022)."
Dataset Splits | No | The paper mentions initial points and iteration counts for its experiments, but it does not specify explicit training/validation/test splits (by percentages, counts, or predefined split names) for the datasets used.
Hardware Specification | Yes | "All experiments were run on machines with specifications described in Table 3." Table 3: CPU: Intel Core i9-9900X @ 3.50GHz; GPU: Nvidia RTX 2080; Memory: 64 GB DDR4.
Software Dependencies | No | The paper mentions specific algorithms and frameworks (e.g., HEBO, LPSolve) and notes in Appendix C that it provides "all algorithm settings", but it does not list software dependencies with version numbers such as "Python 3.8" or "PyTorch 1.9".
Experiment Setup | Yes | "In Appendix C, we provide all algorithm settings used in our experiments." Table 2: Tree: acquisition function: Additive UCB with β_t = 0.5 log(2t); decomposition learning interval: 15; Gibbs sampling iterations: 100. RDUCB: acquisition function: Additive UCB with β_t = 0.5 log(2t); size of random tree: max{d/5, 1}. A hedged sketch of this acquisition rule follows below.
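
To make the pseudocode row concrete: the paper's Algorithm 2 samples a random tree-structured decomposition of the input variables. The sketch below is a minimal illustration under our own assumptions, not the authors' exact sampler; the function name `sample_random_tree` and the union-find construction are ours. It grows a random forest over the d variable indices by joining components at random, so the result is always acyclic, and each edge (i, j) can serve as a pairwise clique of the additive surrogate.

```python
import random

def sample_random_tree(d, n_edges):
    # Grow a random forest over variables 0..d-1 by repeatedly joining
    # two variables from different components; union-find keeps the
    # structure acyclic. Requires n_edges <= d - 1 to terminate.
    parent = list(range(d))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    edges = []
    while len(edges) < n_edges:
        i, j = random.sample(range(d), 2)
        ri, rj = find(i), find(j)
        if ri != rj:  # joining two components keeps it a forest
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Tree size from Table 2, max{d/5, 1}, for a 20-dimensional problem:
print(sample_random_tree(d=20, n_edges=max(20 // 5, 1)))
```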
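
The Table 2 settings also pin down the acquisition rule. The following is a hedged sketch, assuming per-component GP posterior means and standard deviations are already available at a candidate point; the helper names `beta_t` and `additive_ucb` are ours, not from the paper's code. It shows why the decomposition matters: the acquisition is a sum of low-dimensional terms, so each term can be optimised over its own clique of variables independently.

```python
import math
import numpy as np

def beta_t(t):
    # Exploration coefficient from Table 2: beta_t = 0.5 * log(2t).
    return 0.5 * math.log(2 * t)

def additive_ucb(means, stds, t):
    # means[g], stds[g]: posterior mean / std of the g-th additive GP
    # component at a candidate point. The additive UCB is the sum over
    # components of mu_g + sqrt(beta_t) * sigma_g.
    return float(np.sum(means + math.sqrt(beta_t(t)) * stds))

# Example: 4 components at iteration t = 10.
print(additive_ucb(np.zeros(4), np.ones(4), t=10))
```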