A General Framework for User-Guided Bayesian Optimization

Authors: Carl Hvarfner, Frank Hutter, Luigi Nardi

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
Researcher Affiliation | Collaboration | Carl Hvarfner (Lund University, carl.hvarfner@cs.lth.se), Frank Hutter (University of Freiburg, fh@cs.uni-freiburg.de), Luigi Nardi (DBtune, Lund University, Stanford University, luigi.nardi@cs.lth.se)
Pseudocode | Yes | Algorithm 1: ColaBO iteration
Open Source Code | Yes | The experimental setup is outlined in Appendix B, and our code is publicly available at https://github.com/hvarfner/colabo.
Open Datasets | Yes | We evaluate the performance of ColaBO on various tasks, using priors over the optimum π_x obtained from known optima on synthetic tasks, as well as from prior work (Mallik et al., 2023) on realistic tasks. [...] We evaluate ColaBO on three 4D deep learning HPO tasks from the PD1 (Wang et al., 2023) benchmarking suite. [...] We evaluate all methods on five deep learning tasks (6D) from the LCBench (Zimmer et al., 2020) suite, utilizing priors from MF-Prior-Bench.
Dataset Splits | No | The paper mentions “train”, “validation”, and “test” as parts of the BO process, but it does not specify explicit percentages, sample counts, or specific predefined splits for any of the datasets or benchmarks used in the experiments.
Hardware Specification | No | The paper mentions: “The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973.” This indicates a computing resource but lacks specific hardware details such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper states: “All acquisition functions are implemented in BoTorch (Balandat et al., 2020) using a squared exponential kernel and MAP hyperparameter estimation.” While BoTorch is mentioned, a specific version number for it or any other software dependency is not provided.
Experiment Setup | Yes | All hyperparameters, lengthscale, outputscale and observation noise (θ = {ℓ, σ²_ε, σ²_f}), are given a conventional LN(0, 1) prior, applied on normalized inputs and standardized outputs. Furthermore, we fit the constant c of the mean function, assigning it a N(0, 1) prior as well. In Tab. 1, we display the parameters of the MC approximations for various tasks.
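
The surrogate configuration quoted in the last two rows (squared exponential kernel, LN(0, 1) priors on lengthscale, outputscale and noise, N(0, 1) prior on the constant mean, normalized inputs and standardized outputs, MAP fitting) can be approximated in BoTorch/GPyTorch roughly as follows. This is a minimal sketch inferred from the quoted text, not the authors' released code; keyword names and the fitting helper vary slightly across BoTorch/GPyTorch versions, and `train_X`/`train_Y` are placeholder data.

```python
# Sketch of a GP surrogate matching the quoted setup (not the authors' code).
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize
from botorch.models.transforms.outcome import Standardize
from botorch.fit import fit_gpytorch_mll
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
from gpytorch.mlls import ExactMarginalLogLikelihood
from gpytorch.priors import LogNormalPrior, NormalPrior

# Placeholder training data for a 4D task on the unit hypercube.
train_X = torch.rand(20, 4, dtype=torch.double)
train_Y = torch.randn(20, 1, dtype=torch.double)
d = train_X.shape[-1]

# Squared exponential (RBF) kernel with LN(0, 1) priors on lengthscale and outputscale.
covar_module = ScaleKernel(
    RBFKernel(ard_num_dims=d, lengthscale_prior=LogNormalPrior(0.0, 1.0)),
    outputscale_prior=LogNormalPrior(0.0, 1.0),
)
# LN(0, 1) prior on the observation noise, N(0, 1) prior on the constant mean c.
likelihood = GaussianLikelihood(noise_prior=LogNormalPrior(0.0, 1.0))
mean_module = ConstantMean(constant_prior=NormalPrior(0.0, 1.0))

model = SingleTaskGP(
    train_X,
    train_Y,
    likelihood=likelihood,
    covar_module=covar_module,
    mean_module=mean_module,
    input_transform=Normalize(d=d),      # normalized inputs
    outcome_transform=Standardize(m=1),  # standardized outputs
)

# Maximizing the marginal log-likelihood plus the registered hyperparameter priors
# corresponds to MAP hyperparameter estimation.
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)
```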
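
For intuition only, the "prior over the optimum" π_x mentioned in the Open Datasets row is a density over the search space expressing where the user believes the optimum lies. The snippet below is a purely hypothetical illustration (a Gaussian belief around a guessed optimum on the unit hypercube); it is not the prior construction used in the paper or in MF-Prior-Bench, and all values are placeholders.

```python
# Hypothetical illustration of a user prior over the optimum location (not from the paper).
import torch
from torch.distributions import MultivariateNormal

# Assumed user belief: the optimum of a 4D problem lies near this point of [0, 1]^4.
believed_optimum = torch.tensor([0.3, 0.7, 0.5, 0.1])
pi_x = MultivariateNormal(believed_optimum, covariance_matrix=0.05 * torch.eye(4))

# Draw candidate locations from the belief, clipped back into the unit hypercube,
# and evaluate the log-density of the belief at its center.
samples = pi_x.sample((1024,)).clamp(0.0, 1.0)
log_density_at_center = pi_x.log_prob(believed_optimum)
```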