Lookahead Bayesian Optimization with Inequality Constraints

Authors: Remi Lam, Karen Willcox

NeurIPS 2017

Reproducibility assessment (variable: result, followed by the supporting LLM response):

Research Type: Experimental
  "We present numerical experiments demonstrating the performance improvements of such a lookahead approach compared to several greedy BO algorithms, including constrained expected improvement (EIC) and predictive entropy search with constraint (PESC)." (A minimal sketch of the EIC acquisition appears after this table.)

Researcher Affiliation: Academia
  Remi R. Lam, Massachusetts Institute of Technology, Cambridge, MA (rlam@mit.edu); Karen E. Willcox, Massachusetts Institute of Technology, Cambridge, MA (kwillcox@mit.edu)

Pseudocode: Yes
  Algorithm 1: Constrained Bayesian Optimization; Algorithm 2: Rollout Utility Function. (A generic sketch of the constrained BO loop appears after this table.)

Open Source Code: No
  The paper mentions using the Spearmint package and links to its repository, but it neither links to nor states that the authors release source code for their own proposed method.

Open Datasets: No
  The paper evaluates on analytic test functions (P1-P3) and a reacting flow model (P4); none of these are publicly available datasets with access information. For P4, the paper cites a publication describing the model, not an explicit dataset.

Dataset Splits: No
  The paper discusses the evaluation budget and the number of optimization iterations, but does not specify the training, validation, or test splits typical of machine learning tasks.

Hardware Specification: No
  The paper notes that "solving large systems of PDEs can take over a day on a supercomputer" as a general statement about problem cost, but it gives no details of the hardware (e.g., CPU or GPU models, memory) used to run the reported experiments.

Software Dependencies: No
  The paper mentions using the Spearmint package but specifies no version number for it or for any other software dependency crucial to reproducibility.

Experiment Setup: Yes
  "For the rollout algorithm, we use independent zero-mean GPs with automatic relevance determination (ARD) squared-exponential kernel to model each expensive-to-evaluate function. ... To compute the expectations of Eqs. 11-12, we employ N_q = 3I + 1 Gauss-Hermite quadrature weights and points and we set the discount factor to γ = 0.9. Finally, at iteration n, the best value f_best^{S_n} is set to the minimum posterior mean µ_n(x; f) over the designs x in the training set S_n, such that the posterior mean of each constraint is feasible. ... For P1 and P2, we use N = 40 evaluations ... For P3 and P4, we use a small number of iterations N = 60." (Sketches of these setup ingredients appear below.)
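
For reference on the EIC baseline named in the Research Type entry, here is a minimal sketch of the constrained expected improvement acquisition, assuming a single inequality constraint c(x) <= 0 modeled by an independent GP. The function and argument names are illustrative, not from the paper or from Spearmint.

    # Constrained expected improvement: standard EI on the objective GP,
    # weighted by the posterior probability that the constraint GP is
    # feasible. All names below are illustrative.
    from scipy.stats import norm

    def eic(mu_f, sigma_f, mu_c, sigma_c, f_best):
        """EIc(x) = EI(x) * P(c(x) <= 0) at a candidate x, given GP posterior
        means and standard deviations for objective f and constraint c."""
        z = (f_best - mu_f) / sigma_f
        ei = (f_best - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)  # standard EI
        prob_feasible = norm.cdf(-mu_c / sigma_c)  # P(c(x) <= 0) under N(mu_c, sigma_c^2)
        return ei * prob_feasible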
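
The paper's Algorithm 1 is not reproduced here; the following is only a generic sketch of the constrained BO loop it describes, with fit_gp and acquisition as hypothetical placeholders for the paper's GP models and rollout utility (Algorithm 2).

    # Generic constrained BO loop (the shape of Algorithm 1): fit GPs to the
    # objective and each constraint, maximize an acquisition, evaluate, repeat.
    def constrained_bo(f, constraints, x_init, fit_gp, acquisition, n_iters):
        X = list(x_init)
        y_f = [f(x) for x in X]
        y_c = [[c(x) for c in constraints] for x in X]
        for _ in range(n_iters):
            gp_f = fit_gp(X, y_f)                       # posterior GP on the objective
            gp_c = [fit_gp(X, [row[i] for row in y_c])  # one independent GP per constraint
                    for i in range(len(constraints))]
            x_next = acquisition(gp_f, gp_c, X)         # e.g., the rollout utility
            X.append(x_next)
            y_f.append(f(x_next))
            y_c.append([c(x_next) for c in constraints])
        # best observed design whose constraints are all feasible (c <= 0)
        feasible = [i for i, row in enumerate(y_c) if all(v <= 0 for v in row)]
        best = min(feasible, key=lambda i: y_f[i], default=None)
        return X[best] if best is not None else None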
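
To make the quoted setup concrete, here is a small sketch of two of its ingredients: an ARD squared-exponential kernel and Gauss-Hermite quadrature for expectations under a Gaussian, using N_q = 3I + 1 points as quoted. This is an assumption-laden illustration (the value of I and all names are hypothetical), not the authors' Spearmint-based implementation.

    import numpy as np

    def ard_se_kernel(x1, x2, lengthscales, variance=1.0):
        """ARD squared-exponential kernel:
        k(x1, x2) = variance * exp(-0.5 * sum_d ((x1_d - x2_d) / l_d)^2)."""
        d = (np.asarray(x1) - np.asarray(x2)) / np.asarray(lengthscales)
        return variance * np.exp(-0.5 * np.dot(d, d))

    def gauss_hermite_expectation(g, mu, sigma, n_points):
        """Approximate E[g(Z)] for Z ~ N(mu, sigma^2) by Gauss-Hermite
        quadrature, via the change of variables z = mu + sqrt(2) * sigma * t."""
        t, w = np.polynomial.hermite.hermgauss(n_points)
        z = mu + np.sqrt(2.0) * sigma * t
        return np.dot(w, g(z)) / np.sqrt(np.pi)

    I = 1             # number of constraints (illustrative value)
    Nq = 3 * I + 1    # quadrature points, matching the quoted N_q = 3I + 1
    gamma = 0.9       # discount factor from the quoted setup

    # Example: E[Z^2] for Z ~ N(0, 1) is 1, recovered exactly here.
    print(gauss_hermite_expectation(lambda z: z**2, 0.0, 1.0, Nq))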