BORE: Bayesian Optimization by Density-Ratio Estimation

Authors: Louis C. Tiao, Aaron Klein, Matthias W. Seeger, Edwin V. Bonilla, Cédric Archambeau, Fabio Ramos

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We describe the experiments conducted to empirically evaluate our method. To this end, we consider a variety of problems, ranging from automated machine learning (AUTOML), robotic arm control, to racing line optimization. We provide comparisons against a comprehensive selection of state-of-the-art baselines.
Researcher Affiliation | Collaboration | University of Sydney, Sydney, Australia; CSIRO's Data61, Sydney, Australia; Amazon, Berlin, Germany; NVIDIA, Seattle, WA, USA.
Pseudocode | Yes | Algorithm 1: Bayesian optimization by density-ratio estimation (BORE). (A hedged sketch of this loop is given after the table.)
Open Source Code | Yes | Our open-source implementation is available at https://github.com/ltiao/bore.
Open Datasets | Yes | We consider four datasets: PROTEIN, NAVAL, PARKINSONS, and SLICE, and utilize HPOBench (Klein & Hutter, 2019)... We utilize NASBench201 (Dong & Yang, 2020), which tabulates precomputed results from all possible 5^6 = 15,625 combinations for each of the three datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16 (Chrabaszcz et al., 2017).
Dataset Splits | Yes | We consider four datasets: PROTEIN, NAVAL, PARKINSONS, and SLICE, and utilize HPOBench (Klein & Hutter, 2019) which tabulates, for each dataset, the MSEs resulting from all possible (62,208) configurations. Additional details are included in Appendix K.1... We utilize NASBench201 (Dong & Yang, 2020), which tabulates precomputed results from all possible 5^6 = 15,625 combinations for each of the three datasets: CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet-16 (Chrabaszcz et al., 2017).
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models) used for running its experiments.
Software Dependencies | No | The paper mentions software such as XGBoost and the L-BFGS optimizer, but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | We set γ = 1/3 across all variants and benchmarks. For candidate suggestion in the tree-based variants, we use RS with a function evaluation limit of 500 for problems with discrete domains, and DE with a limit of 2,000 for those with continuous domains.
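
For orientation, below is a minimal, hedged sketch of the BORE loop (Algorithm 1), using the settings reported under Experiment Setup (γ = 1/3, random-search candidate suggestion with a 500-evaluation limit). It is not the authors' implementation (see https://github.com/ltiao/bore); the objective, bounds, budget, and the scikit-learn random-forest classifier are illustrative stand-ins for the paper's probabilistic classifier.

    # Hedged sketch of the BORE loop, assuming a continuous box-constrained
    # minimization problem. All names here are illustrative placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier  # stand-in classifier

    def bore_minimize(objective, bounds, n_init=10, n_iters=50,
                      gamma=1/3, n_candidates=500, seed=0):
        rng = np.random.default_rng(seed)
        dim = len(bounds)
        lo, hi = np.array(bounds, dtype=float).T

        # Initial design: uniform random samples.
        X = rng.uniform(lo, hi, size=(n_init, dim))
        y = np.array([objective(x) for x in X])

        for _ in range(n_iters):
            # Label the best gamma-fraction of observations as the "good" class.
            tau = np.quantile(y, gamma)
            z = (y <= tau).astype(int)

            # Fit a probabilistic classifier to separate "good" from "bad"
            # points; its predicted class probability serves as the
            # acquisition function.
            clf = RandomForestClassifier(n_estimators=100, random_state=0)
            clf.fit(X, z)

            # Candidate suggestion by random search (the paper uses RS with a
            # 500-evaluation limit on discrete problems, DE with 2,000 on
            # continuous ones).
            candidates = rng.uniform(lo, hi, size=(n_candidates, dim))
            scores = clf.predict_proba(candidates)[:, 1]
            x_next = candidates[np.argmax(scores)]

            # Evaluate the objective and augment the observations.
            X = np.vstack([X, x_next])
            y = np.append(y, objective(x_next))

        best = np.argmin(y)
        return X[best], y[best]

The design rationale, as reported in the paper, is that the classifier's predicted probability of the "good" class is proportional to the γ-relative density ratio, whose maximization is equivalent to maximizing expected improvement; the classifier output can therefore be used directly as the acquisition function.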