Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs

Authors: Raul Astudillo, Daniel Jiang, Maximilian Balandat, Eytan Bakshy, Peter Frazier

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "An empirical evaluation on a number of synthetic and real-world problems demonstrates that B-MS-EI performs favorably relative to other acquisition functions that are widely used in settings with heterogeneous costs." (A sketch of one such cost-aware baseline appears after this table.) |
| Researcher Affiliation | Collaboration | Raul Astudillo (Cornell University, ra598@cornell.edu); Daniel R. Jiang (Facebook, drjiang@fb.com); Maximilian Balandat (Facebook, balandat@fb.com); Eytan Bakshy (Facebook, ebakshy@fb.com); Peter I. Frazier (Cornell University, pf98@cornell.edu) |
| Pseudocode | No | The paper describes its algorithms and formulations in prose and equations but does not include a distinct pseudocode block or algorithm box. |
| Open Source Code | Yes | "An implementation of our algorithms and numerical experiments can be found at https://github.com/RaulAstudillo06/BudgetedBO." |
| Open Datasets | Yes | "The LDA and CNN problems use publicly available data sets from the HPOlib (The HPOlib authors, 2014) and HPOlib1.5 (The HPOlib1.5 authors, 2017) hyperparameter optimization libraries. ... The second data set was obtained by training a 3-layer convolutional neural network on the CIFAR-10 dataset. ... Finally, the RF-Boston problem considers optimization of the 5-fold cross-validation error when using a random forest (RF) regressor on the Boston dataset, both of which are from sklearn (Pedregosa et al., 2011)." |
| Dataset Splits | Yes | "The RF-Boston problem considers optimization of the 5-fold cross-validation error when using a random forest (RF) regressor on the Boston dataset." |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU/GPU models or memory) used to run its experiments. |
| Software Dependencies | No | The paper does not list the software dependencies or version numbers needed to reproduce its experiments. |
| Experiment Setup | Yes | "For instance, in the context of hyperparameter optimization of machine learning algorithms (Swersky et al., 2013; Wu et al., 2020), certain values of hyperparameters such as the learning rate may yield longer training times. ... We tune the following hyperparameters of the RF regressor: n_estimators ∈ [1, 256] (rounded to the nearest integer), max_depth ∈ [1, 64] (rounded to the nearest integer), and max_features ∈ [0.1, 1] (on a log10 scale)." (A sketch of this objective appears after the table.) |
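
To make the "heterogeneous costs" comparison concrete, the following is a minimal sketch of expected improvement per unit cost (EIpu), a widely used cost-aware acquisition baseline of the kind the paper compares B-MS-EI against. It is not the paper's method; all function and variable names are illustrative and are not taken from the authors' repository.

```python
# Minimal EIpu sketch: analytic expected improvement discounted by the
# predicted evaluation cost. Names are illustrative, not from the paper.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f):
    """Analytic EI for maximization, given GP posterior mean and std."""
    sigma = np.maximum(sigma, 1e-9)  # guard against zero posterior std
    z = (mu - best_f) / sigma
    return (mu - best_f) * norm.cdf(z) + sigma * norm.pdf(z)

def ei_per_unit_cost(mu, sigma, best_f, expected_cost):
    """EIpu: expected improvement per unit of (predicted) cost."""
    return expected_improvement(mu, sigma, best_f) / np.maximum(expected_cost, 1e-9)

# Toy usage: posterior over 5 candidate points with heterogeneous costs.
mu = np.array([0.2, 0.5, 0.4, 0.9, 0.7])
sigma = np.array([0.3, 0.2, 0.5, 0.1, 0.4])
cost = np.array([1.0, 5.0, 2.0, 10.0, 3.0])
scores = ei_per_unit_cost(mu, sigma, best_f=0.6, expected_cost=cost)
print("next candidate to evaluate:", int(np.argmax(scores)))
```

Dividing by cost makes the acquisition favor cheap, moderately promising points over expensive, slightly better ones; this myopic trade-off is exactly what the paper's multi-step budgeted formulation is designed to improve on.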
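
Likewise, here is a minimal sketch of the RF-Boston objective described in the Experiment Setup and Dataset Splits rows: the 5-fold cross-validation error of a random forest regressor on the Boston housing data, with the stated rounding and log10 transforms. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; in particular, sklearn's load_boston loader was removed in scikit-learn 1.2, so the sketch assumes fetch_openml as a substitute on newer versions.

```python
# Illustrative reconstruction of the RF-Boston objective; not the
# authors' code. Assumes fetch_openml("boston") as a stand-in for the
# load_boston loader removed in scikit-learn >= 1.2.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = fetch_openml(name="boston", version=1, return_X_y=True, as_frame=False)

def rf_boston_objective(n_estimators_raw, max_depth_raw, log10_max_features):
    """5-fold CV error for one hyperparameter configuration.

    n_estimators_raw in [1, 256] and max_depth_raw in [1, 64] are
    rounded to the nearest integer; max_features is searched on a
    log10 scale over [0.1, 1], as stated in the paper.
    """
    rf = RandomForestRegressor(
        n_estimators=int(round(n_estimators_raw)),
        max_depth=int(round(max_depth_raw)),
        max_features=10.0 ** log10_max_features,  # maps [-1, 0] -> [0.1, 1]
        random_state=0,
    )
    # sklearn reports negative MSE; negate to get an error to minimize.
    scores = cross_val_score(rf, X, y, cv=5, scoring="neg_mean_squared_error")
    return -scores.mean()

print(rf_boston_objective(128, 16, -0.5))
```

In the cost-aware setting studied by the paper, the wall-clock time of each such evaluation varies with the hyperparameters (e.g., larger n_estimators trains more trees), which is what makes this problem a natural budgeted-BO benchmark.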