Bounce: Reliable High-Dimensional Bayesian Optimization for Combinatorial and Mixed Spaces
Authors: Leonard Papenmeier, Luigi Nardi, Matthias Poloczek
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Comprehensive experiments show that Bounce reliably achieves and often even improves upon state-of-the-art performance on a variety of high-dimensional problems." and "We evaluate Bounce empirically on various benchmarks whose inputs are combinatorial, continuous, or mixed spaces." (Section 4, Experimental evaluation) |
| Researcher Affiliation | Collaboration | Leonard Papenmeier, Lund University, leonard.papenmeier@cs.lth.se; Luigi Nardi, Lund University / Stanford University / DBtune, luigi.nardi@cs.lth.se; Matthias Poloczek, Amazon, San Francisco, CA 94105, USA, matpol@amazon.com |
| Pseudocode | Yes | Algorithm 1 gives a high-level overview of Bounce. |
| Open Source Code | Yes | Therefore, we open-source the Bounce code.1 https://github.com/LeoIV/bounce |
| Open Datasets | Yes | The evaluation uses seven established benchmarks [21]: 53D SVM, 50D LABS, 125D Cluster Expansion [3, 4], 60D MaxSAT60 [21, 56], 25D Pest Control, 53D Ackley53, and 25D Contamination [6, 39, 56]. |
| Dataset Splits | No | The paper states 'We initialize every algorithm with five initial points.' but does not provide specific details on train/validation/test dataset splits, such as percentages, sample counts, or citations to predefined splits for its experiments. |
| Hardware Specification | Yes | Due to its high-memory footprint, we ran BODi on NVidia A100 80GB GPUs for 300 GPU/h. We ran Bounce on NVidia A40 GPUs for 2,000 GPU/h. We ran the remaining methods for 20,000 GPU/h on one core of Intel Xeon Gold 6130 CPUs with 60GB of memory. |
| Software Dependencies | No | The paper states 'We implement Bounce in Python using the BoTorch [5] and GPyTorch [29] libraries.' but does not specify the version numbers for Python, BoTorch, or GPyTorch, which are necessary for full reproducibility. |
| Experiment Setup | Yes | "Input: initial target dimensionality dinit, evaluation budget m, batch size B, evaluation budget to input dimensionality mD, # new bins added per dimension b, number of design of experiment (DOE) points ninit"; "We initialize every algorithm with five initial points."; "We run all methods for 200 function evaluations unless stated otherwise."; "We use an initial trust region base length of 40 for the combinatorial variables, and 0.8 for the continuous variables." |
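
The experiment-setup values quoted above can be collected into a single configuration object. This is a minimal sketch, not the authors' actual API: the class and field names are illustrative, and only values explicitly stated in the paper are given defaults.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BounceRunConfig:
    """Experiment-setup values quoted from the paper (field names are illustrative)."""
    n_init_points: int = 5                      # "five initial points" (DOE points)
    eval_budget: int = 200                      # "200 function evaluations unless stated otherwise"
    tr_base_length_combinatorial: float = 40.0  # initial trust region base length, combinatorial vars
    tr_base_length_continuous: float = 0.8      # initial trust region base length, continuous vars


# Example: the default configuration matching the paper's stated setup.
cfg = BounceRunConfig()
print(cfg.eval_budget, cfg.n_init_points)
```

Parameters that the paper lists as inputs but does not assign a value in the quoted text (e.g. the initial target dimensionality dinit or the batch size B) are deliberately omitted rather than guessed.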