High-Dimensional Bayesian Optimization via Tree-Structured Additive Models
Authors: Eric Han, Ishank Arora, Jonathan Scarlett (pp. 7630-7638)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate and discuss the efficacy of our approach via a range of experiments on synthetic functions and real-world datasets. |
| Researcher Affiliation | Academia | Eric Han¹, Ishank Arora², Jonathan Scarlett¹³: ¹School of Computing, National University of Singapore; ²Indian Institute of Technology (BHU) Varanasi; ³Department of Mathematics & Institute of Data Science, National University of Singapore |
| Pseudocode | Yes | Algorithm 1: TREE-GP-UCB; Algorithm 2: MSG-PASSING-CONTINUOUS; Algorithm 3: TREE-LEARNING; Algorithm 4: GIBBS-SAMPLING; Algorithm 5: MUTATION |
| Open Source Code | Yes | 1The code is available at https://github.com/eric-vader/HD-BO-Additive-Models. |
| Open Datasets | Yes | Additional experiments on the NAS-Bench-101 (NAS) dataset (Ying et al. 2019; Klein and Hutter 2019) and BO-based adversarial attacks (BA) (Ru et al. 2020) can be found in the appendix. |
| Dataset Splits | No | The paper describes a sequential Bayesian Optimization process involving iterative data acquisition, rather than fixed training/validation/test splits with specific percentages or counts. The number of initial points and total iterations are specified, but these do not constitute a dataset partitioning in the standard sense. |
| Hardware Specification | No | The paper mentions "AWS Cloud Credits for Research program" in the acknowledgments, indicating the use of AWS. However, it does not specify any particular hardware details such as GPU models, CPU models, or instance types used for the experiments. |
| Software Dependencies | Yes | We consider tuning the parameters of lpsolve, an open-source Mixed Integer Linear Programming (MILP) solver (Berkelaar, Eikland, and Notebaert 2004). |
| Experiment Setup | Yes | We ran all experiments with N_init = 10 initial points and N_iter = 1000 total points. (...) We initialize the dimensional lengthscale and scale parameters as σ_i = 0.5 and l_i = 0.1 for all i. We set η = 0.1 in (3) to account for noisy observations. (...) We sample the structure S = 250 times every C = 15 iterations. (...) We set the trade-off parameter in UCB to be β(t) = 0.5 log(2t), as suggested in (Rolland et al. 2018). For discrete experiments, we discretize each dimension to 50 levels, with the maximum number of individual acquisition function evaluations capped at 1000. For continuous experiments, we let each level's grid size be R = 4 and the number of levels be L = 4 (see Fig. 2) with no maximum evaluation limits. |
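The quoted setup pins down the UCB exploration schedule and the GP hyperparameter initialization. As a minimal sketch (not the authors' code; variable names are illustrative assumptions), the reported values translate to:

```python
import numpy as np

# Hedged sketch of the reported experiment setup; names are illustrative,
# not taken from the authors' repository.

N_INIT, N_ITER = 10, 1000   # initial points and total points
SIGMA_I, L_I = 0.5, 0.1     # per-dimension scale and lengthscale init
ETA = 0.1                   # noise parameter in the paper's Eq. (3)
S, C = 250, 15              # structure samples, drawn every C iterations

def beta(t):
    """UCB trade-off parameter beta(t) = 0.5 * log(2t), t >= 1
    (as suggested in Rolland et al. 2018)."""
    return 0.5 * np.log(2.0 * t)

def ucb(mu, sigma, t):
    """Upper confidence bound acquisition: mu(x) + sqrt(beta(t)) * sigma(x)."""
    return mu + np.sqrt(beta(t)) * sigma
```

Note that beta(t) grows logarithmically, so exploration is never switched off but is increasingly dominated by the posterior mean as t grows.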