Bayesian Optimization with a Finite Budget: An Approximate Dynamic Programming Approach

Authors: Remi Lam, Karen Willcox, David H. Wolpert

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present numerical experiments showing that the resulting algorithm for optimization with a finite budget outperforms several popular Bayesian optimization algorithms. In Sec. 6, we numerically investigate the proposed algorithm and present our conclusions in Sec. 7."
Researcher Affiliation | Academia | "Remi R. Lam, Massachusetts Institute of Technology, Cambridge, MA (rlam@mit.edu); Karen E. Willcox, Massachusetts Institute of Technology, Cambridge, MA (kwillcox@mit.edu); David H. Wolpert, Santa Fe Institute, Santa Fe, NM (dhw@santafe.edu)"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper uses GP-generated objective functions and references a website for 'test functions', but it does not provide concrete access information (link, DOI, repository, or formal citation with authors/year) for a specific dataset.
Dataset Splits | No | The paper describes how initial training points are generated and how the training set is augmented during the BO process, but it does not specify explicit training/validation/test splits (e.g., percentages or sample counts) from a fixed dataset.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "We use a zero-mean GP with square-exponential kernel (hyper-parameters: maximum variance σ² = 4, length scale L = 0.1, noise variance λ = 10⁻³) to generate 24 objective functions defined on X = [0, 1]². All algorithms are given a budget of N = 15 evaluations. All algorithms use the same kernel and hyper-parameters as those used to generate the objective functions. … parameters of the rolling horizon h ∈ {2, 3, 4, 5} and discount factor γ ∈ {0.5, 0.7, 0.9, 1.0}."
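
The quoted setup is sufficient to sketch how the synthetic benchmark could be regenerated. Below is a minimal, illustrative Python/NumPy sketch (not provided by the paper) of drawing the 24 GP-sampled objective functions on [0, 1]² with the stated hyper-parameters (σ² = 4, length scale 0.1, noise variance 10⁻³); the grid resolution, random seed, and all function and variable names are our own assumptions.

```python
import numpy as np

def squared_exponential_kernel(X1, X2, sigma2=4.0, length_scale=0.1):
    """Squared-exponential covariance with maximum variance sigma2 (assumed values from the paper's setup)."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return sigma2 * np.exp(-0.5 * sq_dists / length_scale**2)

def sample_gp_objectives(n_functions=24, n_grid=30, noise_var=1e-3, seed=0):
    """Draw objective functions from a zero-mean GP on a regular grid over [0, 1]^2.

    n_grid and seed are illustrative choices, not values reported in the paper.
    """
    rng = np.random.default_rng(seed)
    g = np.linspace(0.0, 1.0, n_grid)
    xx, yy = np.meshgrid(g, g)
    X = np.column_stack([xx.ravel(), yy.ravel()])            # (n_grid**2, 2) input locations
    K = squared_exponential_kernel(X, X) + noise_var * np.eye(len(X))
    chol = np.linalg.cholesky(K)                              # K = chol @ chol.T
    # Each column of Z yields one sampled objective: f = chol @ z with z ~ N(0, I).
    Z = rng.standard_normal((len(X), n_functions))
    F = chol @ Z
    return X, F

X, F = sample_gp_objectives()
print(X.shape, F.shape)  # (900, 2) (900, 24): 24 objective functions on a 30x30 grid
```

A Bayesian optimization run under the paper's protocol would then treat each column of F as one objective, stop after N = 15 evaluations, and reuse the same kernel and hyper-parameters in the surrogate GP.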