Batch Bayesian optimisation via density-ratio estimation with guarantees

Authors: Rafael Oliveira, Louis Tiao, Fabio T. Ramos

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section presents experiments assessing the theoretical results and demonstrating the practical performance of batch BORE on a series of global optimisation benchmarks. We compared our methods against GP-based BO baselines in both sets of experiments. Additional experimental results, including the sequential setting (Appendix E), a description of the experimental setup (Appendix E), and further discussion of theoretical aspects can be found in the supplementary material.
Researcher Affiliation | Collaboration | Rafael Oliveira (1,2) rafael.oliveira@sydney.edu.au; Louis C. Tiao (3) louis.tiao@sydney.edu.au; Fabio Ramos (3,4) fabio.ramos@sydney.edu.au. (1) Brain and Mind Centre, The University of Sydney, Australia; (2) ARC Training Centre in Data Analytics for Resources and Environments, Australia; (3) School of Computer Science, The University of Sydney, Australia; (4) NVIDIA, USA
Pseudocode | Yes | Algorithm 1: BORE
1: for t ∈ {1, …, T} do
2:     τ := Φ̂_{t-1}^{-1}(γ)
3:     z_i := I[y_i ≤ τ], i ∈ {1, …, t-1}
4:     D_{t-1} := {(x_i, z_i)}_{i=1}^{t-1}
5:     π̂_t ∈ argmin_π L[π | D_{t-1}]
6:     x_t ∈ argmax_{x ∈ X} π̂_t(x)
7:     y_t := f(x_t) + ε_t
8: end for
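The loop in Algorithm 1 can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: the objective f, the quantile level gamma, the unit-cube domain, the random-candidate inner maximisation, and the kernel-smoothing classifier standing in for π̂ are all assumptions made for the sketch.

```python
import numpy as np

# Toy sketch of the BORE loop (Algorithm 1). All specifics below -- the
# objective, gamma, the domain, and the stand-in classifier -- are assumptions.

def f(x):
    """Toy objective to minimise (optimum at x = 0.5)."""
    return np.sum((x - 0.5) ** 2, axis=-1)

def pi_hat(cand, X, z, h=0.15):
    """Kernel-smoothed estimate of P(z = 1 | x), a simple classifier stand-in."""
    d2 = np.sum((cand[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * h ** 2))
    return (w * z).sum(axis=1) / (w.sum(axis=1) + 1e-12)

rng = np.random.default_rng(0)
dim, gamma, T = 2, 0.25, 30

X = rng.uniform(0.0, 1.0, size=(5, dim))   # initial design
y = f(X)

for t in range(T):
    tau = np.quantile(y, gamma)            # line 2: gamma-quantile of observed y
    z = (y <= tau).astype(float)           # line 3: z_i := I[y_i <= tau]
    cand = rng.uniform(0.0, 1.0, size=(1024, dim))
    x_next = cand[np.argmax(pi_hat(cand, X, z))]  # lines 5-6: maximise pi_hat
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))            # line 7: evaluate f (noise-free here)

print(float(np.min(y)))                    # best objective value found
```

The key idea the sketch preserves is that no surrogate of f is fit: each iteration reduces acquisition to a binary classification problem (good points, y ≤ τ, versus the rest) and proposes the candidate where the classifier's positive-class probability is highest.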
Open Source Code | No | Code will be made available at https://github.com/rafaol/batch-bore-with-guarantees
Open Datasets | Yes | Real-data benchmarks. Lastly, we compared the sequential version of BORE++ against BORE and other baselines, including traditional BO methods, such as GP-UCB and GP-EI [1], the Tree-structured Parzen Estimator (TPE) [15], and random search, on real-data benchmarks. In particular, we assessed the algorithms on some of the same benchmarks presented in the original BORE paper [9]. ... neural architecture search on MNIST data
Dataset Splits | No | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See supplement.
Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] Our focus is on theory assessments rather than computational comparisons.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | No | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See supplement.