Batched Energy-Entropy acquisition for Bayesian Optimization

Authors: Felix Teufel, Carsten Stahlhut, Jesper Ferkinghoff-Borg

Venue: NeurIPS 2024

Reproducibility assessment: each entry lists the variable, the result, and the supporting excerpt (LLM response).
Research Type: Experimental
"We demonstrate the applicability of BEEBO on a range of problems, showing competitive performance to existing methods."
Researcher Affiliation: Collaboration
Felix Teufel (1,2), Carsten Stahlhut (1), Jesper Ferkinghoff-Borg (1). (1) Machine Intelligence, Novo Nordisk A/S; (2) Department of Biology, University of Copenhagen.
Pseudocode: Yes
"Algorithm 1: mean BEEBO optimization"
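To give a concrete picture of what such an acquisition computes, here is a minimal sketch in PyTorch/BoTorch terms. It assumes the acquisition value is the summed posterior mean (the "energy" term) plus a temperature-weighted batch information gain (the "entropy" term); the exact form, temperature parameterization, and optimization loop belong to Algorithm 1 in the paper and are not reproduced verbatim here.

```python
import torch

def mean_beebo_sketch(model, X, temperature=1.0):
    """Sketch of a batch energy-entropy acquisition value for a candidate
    batch X of shape (q, d), given a fitted BoTorch GP `model`.
    Illustration only, not the paper's exact Algorithm 1."""
    posterior = model.posterior(X)       # joint GP posterior over the batch
    energy = posterior.mean.sum()        # "energy": summed posterior mean

    # "Entropy" term: information gain of observing the batch under Gaussian
    # noise, I(f; y) = 0.5 * logdet(I + K / sigma^2), with K the joint
    # posterior covariance and sigma^2 the (mean) observation noise.
    K = posterior.distribution.covariance_matrix
    sigma2 = model.likelihood.noise.mean()
    q = X.shape[-2]
    eye = torch.eye(q, dtype=K.dtype, device=K.device)
    info_gain = 0.5 * torch.logdet(eye + K / sigma2)

    return energy + temperature * info_gain
```

In an actual BO round this quantity would be maximized jointly over all q batch points with a gradient-based optimizer, as is standard for BoTorch acquisition functions.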
Open Source Code: Yes
Availability: "A BoTorch implementation of BEEBO is available at https://github.com/novonordisk-research/BEE-BO."
Open Datasets: Yes
Test problems: "We benchmark acquisition function performance on a range of maximization test problems with varying dimensions (Table 1) available in BoTorch [52]. We additionally test on two robot control problems (robot arm pushing and rover trajectory planning) in Appendix D.3 [55, 56]."
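As an illustration of the benchmark source (the specific Table 1 problems are not listed in this excerpt), BoTorch's synthetic test functions can be instantiated as below; Ackley in 10 dimensions is only a placeholder choice.

```python
import torch
from botorch.test_functions import Ackley

# Placeholder problem; the paper's Table 1 problems vary in dimension.
problem = Ackley(dim=10, negate=True)  # negate=True turns minimization into maximization

# Evaluate at random points inside the problem's box bounds (a 2 x d tensor).
lb, ub = problem.bounds[0], problem.bounds[1]
X = lb + (ub - lb) * torch.rand(5, 10, dtype=torch.double)
y = problem(X)  # objective values, shape (5,)
```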
Dataset Splits: No
The paper describes the initial seeding of the GP surrogate and the sequential acquisition of points, but it does not specify traditional train/validation/test dataset splits as commonly found in supervised learning setups.
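A BoTorch-style seeding step consistent with that description might look as follows; the Sobol design and the seed size of 100 points are assumptions for illustration, not values quoted from the paper.

```python
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.utils.sampling import draw_sobol_samples
from gpytorch.mlls import ExactMarginalLogLikelihood

# Quasi-random initial design inside the problem bounds (seed size assumed).
X_init = draw_sobol_samples(bounds=problem.bounds, n=1, q=100).squeeze(0).double()
y_init = problem(X_init).unsqueeze(-1)  # SingleTaskGP expects (n, 1) targets

# Fit the GP surrogate on the seed data; subsequent BO rounds append the
# acquired batches to this training set rather than using held-out splits.
model = SingleTaskGP(X_init, y_init)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))
```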
Hardware Specification: Yes
"Experiments were run on individual Nvidia RTX 6000 and V100 GPUs."
Software Dependencies: Yes
"BEEBO was implemented for full compatibility with the BoTorch framework (version 0.9.4) [52]... We trained GPyTorch (version 1.11) [57] GP models with KeOps [64] Matérn 5/2 kernels."
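Under those stated versions, the surrogate construction presumably resembles the following sketch; note that the KeOps-backed kernel requires the pykeops package, and the ARD lengthscale setting here is an assumption.

```python
from botorch.models import SingleTaskGP
from gpytorch.kernels import ScaleKernel
from gpytorch.kernels.keops import MaternKernel  # KeOps-backed Matern kernel

# Matern 5/2 (nu=2.5) kernel with KeOps acceleration; ARD lengthscales assumed.
covar_module = ScaleKernel(MaternKernel(nu=2.5, ard_num_dims=X_init.shape[-1]))
model = SingleTaskGP(X_init, y_init, covar_module=covar_module)
```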
Experiment Setup: Yes
"On each test problem, we perform 10 rounds of BO... We use Q = 100 for all experiments... we set the explore hyperparameter to 0 in the last round... All experiments were performed with double precision. For TS, 10,000 base Sobol samples were drawn and sampled with MaxPosteriorSampling using the Cholesky decomposition of the covariance matrix. GIBBON was optimized using sequential optimization following the BoTorch tutorial... We used 100,000 random discretized candidates for max-value sampling."
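For the TS baseline described there, a matching BoTorch sketch (with the batch size Q = 100 and 10,000 base Sobol candidates from the excerpt) could be:

```python
from botorch.generation import MaxPosteriorSampling
from botorch.utils.sampling import draw_sobol_samples

# 10,000 quasi-random base candidates, as described for the TS baseline.
X_cand = draw_sobol_samples(bounds=problem.bounds, n=1, q=10_000).squeeze(0).double()

# Thompson sampling: draw joint posterior samples over the candidates
# (the paper states a Cholesky decomposition of the covariance was used)
# and keep the Q = 100 candidates with the highest sampled values.
thompson = MaxPosteriorSampling(model=model, replacement=False)
X_next = thompson(X_cand, num_samples=100)
```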