Increasing the Scope as You Learn: Adaptive Bayesian Optimization in Nested Subspaces

Authors: Leonard Papenmeier, Luigi Nardi, Matthias Poloczek

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A comprehensive evaluation demonstrates that BAXUS achieves better results than the state-of-the-art methods for a broad set of applications.
Researcher Affiliation | Collaboration | Leonard Papenmeier (Lund University, leonard.papenmeier@cs.lth.se); Luigi Nardi (Lund University, Stanford University, DBtune, luigi.nardi@cs.lth.se); Matthias Poloczek (Amazon, San Francisco, CA 94105, USA, matpol@amazon.com)
Pseudocode | Yes | Algorithm 1: BAXUS (an illustrative embedding sketch follows the table)
Open Source Code | Yes | The BAXUS code is available at https://github.com/LeoIV/BAxUS.
Open Datasets | Yes | We evaluate the selected algorithms on six benchmarks that differ considerably in their characteristics. Following [71], we augment the BRANIN2 and HARTMANN6 functions with additional dummy dimensions that have no influence on the function value. We use the 388D SVM benchmark and the 124D soft-constraint version of the MOPTA08 benchmark proposed in [20]. ... We also tested the algorithms on the 300D LASSO-HIGH and the 1000D LASSO-HARD benchmarks from LASSOBENCH [59]. (A dummy-dimension sketch follows the table.)
Dataset Splits | No | The paper mentions "We initialize each optimizer, including BAXUS, with ten initial samples" but does not explicitly state dataset splits for training, validation, or testing using percentages, counts, or references to predefined splits.
Hardware Specification | No | The paper states "The available hardware allowed up to 100 evaluations for SAASBO and 500 evaluations for ALEBO" and refers to Appendix E for compute resources (not provided in the text), but it does not specify exact hardware details (e.g., specific GPU/CPU models or memory amounts).
Software Dependencies | No | "For CMA-ES, we use the PYCMA [27] implementation. For HESBO and ALEBO, we use the AX implementation [1]." While implementations are named, specific version numbers for these software components are not provided in the text.
Experiment Setup | Yes | We initialize each optimizer, including BAXUS, with ten initial samples, set b = 3 and m_D = 1000 for BAXUS, and run 20 repeated trials.
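
The Algorithm 1 row above refers to BAXUS's nested-subspace construction. As illustration only (not the authors' implementation), here is a minimal NumPy sketch of a HeSBO-style sparse embedding together with a bin split that grows the target dimensionality; the function names and the uniform split of bin members are assumptions of this sketch.

```python
import numpy as np

def sparse_embedding(D, d, rng):
    # HeSBO-style sparse embedding: each of the D input dimensions is
    # assigned to one of d target dimensions ("bins") with a random sign.
    target = rng.integers(0, d, size=D)
    sign = rng.choice([-1.0, 1.0], size=D)
    return target, sign

def lift(y, target, sign):
    # Map a point y in the d-dim target space to the D-dim input space.
    return sign * y[target]

def split_bin(target, d, i, b, rng):
    # Split target dimension i into b bins: i keeps one share of its
    # input dimensions, and b - 1 new target dimensions take the rest,
    # so the target dimensionality grows by b - 1.
    members = np.where(target == i)[0]
    rng.shuffle(members)
    chunks = np.array_split(members, b)
    new_target = target.copy()
    for k, chunk in enumerate(chunks[1:], start=1):
        new_target[chunk] = d + k - 1  # fresh indices for the new bins
    return new_target, d + b - 1

rng = np.random.default_rng(0)
target, sign = sparse_embedding(D=100, d=5, rng=rng)
y = rng.standard_normal(5)
x = lift(y, target, sign)                               # 100-dim point
target, d = split_bin(target, d=5, i=0, b=3, rng=rng)   # d grows to 7
```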
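
The Open Datasets row quotes the paper's dummy-dimension augmentation of BRANIN2. A minimal sketch of that idea, assuming the two active coordinates sit at fixed indices (the paper's exact placement and domain handling may differ):

```python
import numpy as np

def branin(x1, x2):
    # Standard 2D Branin function (usual domain: x1 in [-5, 10], x2 in [0, 15]).
    a, b, c = 1.0, 5.1 / (4 * np.pi ** 2), 5 / np.pi
    r, s, t = 6.0, 10.0, 1 / (8 * np.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

def branin_augmented(x, active=(0, 1)):
    # Only two coordinates influence the value; all other dimensions are
    # dummies with no effect, as in the quoted BRANIN2 benchmark. The
    # choice of active indices here is an assumption for illustration.
    x = np.asarray(x)
    return branin(x[active[0]], x[active[1]])

x = np.random.default_rng(0).uniform(-5.0, 15.0, size=100)  # 100D point
print(branin_augmented(x))  # value depends only on x[0] and x[1]
```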
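
The Experiment Setup row quotes ten initial samples and 20 repeated trials. Below is a hypothetical trial harness illustrating that protocol; random_search is a stand-in placeholder so the sketch runs, not BAXUS or the paper's code, and treating m_D = 1000 as an evaluation budget is an assumption.

```python
import numpy as np

def random_search(objective, D, n_init, budget, rng):
    # Stand-in optimizer (plain random search) so the harness is runnable;
    # the paper's experiments use BAXUS and the listed baselines instead.
    X = rng.uniform(-1.0, 1.0, size=(n_init + budget, D))
    return min(float(objective(x)) for x in X)

def run_trials(objective, D, n_trials=20, n_init=10, budget=1000):
    # Mirrors the quoted protocol: ten initial samples per optimizer,
    # repeated over 20 independent trials with distinct seeds.
    return [random_search(objective, D, n_init, budget,
                          np.random.default_rng(seed))
            for seed in range(n_trials)]

best_values = run_trials(lambda x: np.sum(x ** 2), D=100)
```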