Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian

Authors: Victor Picheny, Robert B. Gramacy, Stefan Wild, Sébastien Le Digabel

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 4 provides empirical comparisons, and Section 5 concludes. Here we describe three test problems, each mixing challenging elements from traditional unconstrained blackbox optimization benchmarks, but in a constrained optimization format. We run our optimizers on these problems 100 times under random initializations. In the case of our GP surrogate comparators, this initialization involves choosing random space-filling designs. Our primary means of comparison is an averaged (over the 100 runs) measure of progress defined by the best valid value of the objective for increasing budgets (number of evaluations of the blackbox), n. Figure 1 shows progress over repeated solves with a maximum budget of 40 blackbox evaluations. (A sketch of this best-valid-value progress metric appears below the table.)
Researcher Affiliation | Academia | Victor Picheny, MIAT, Université de Toulouse, INRA, Castanet-Tolosan, France (victor.picheny@toulouse.inra.fr); Robert B. Gramacy, Virginia Tech, Blacksburg, VA, USA (rbg@vt.edu); Stefan Wild, Argonne National Laboratory, Argonne, IL, USA (wild@mcs.anl.gov); Sébastien Le Digabel, École Polytechnique de Montréal, Montréal, QC, Canada (sebastien.le-digabel@polymtl.ca)
Pseudocode | Yes | Algorithm 1: Basic augmented Lagrangian method. Require: λ^0 ≥ 0, ρ^0 > 0. 1: for k = 1, 2, . . . do; 2: Let x^k (approximately) solve (4); 3: Set λ_j^k = max{0, λ_j^{k-1} + (1/ρ^{k-1}) g_j(x^k)}, j = 1, . . . , m; 4: If g(x^k) ≤ 0, set ρ^k = ρ^{k-1}; else, set ρ^k = ρ^{k-1}/2; 5: end for. (A runnable sketch of this loop is given below the table.)
Open Source Code | Yes | Code supporting all methods in this manuscript is provided in two open-source R packages: laGP [8] and DiceOptim [19], both on CRAN [22].
Open Datasets | No | The paper defines its test problems (the LSQ problem, Linear-Ackley-Hartman, and GBSP) as mathematical functions rather than datasets; no links, DOIs, or repositories for publicly available data are provided.
Dataset Splits | No | The paper does not specify standard training, validation, and test dataset splits with percentages or sample counts. It describes an iterative experimental process over 'blackbox evaluations' for 'test problems'.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types used for experiments.
Software Dependencies | Yes | Code supporting all methods in this manuscript is provided in two open-source R packages: laGP [8] and DiceOptim [19], both on CRAN [22]. DiceOptim: Kriging-Based Optimization for Computer Experiments, 2016. R package version 2.0.
Experiment Setup | Yes | We run our optimizers on these problems 100 times under random initializations. Random initial designs of size n = 5 were used, as indicated by the vertical-dashed gray line. The left-hand plot in Figure 1 tracks the average best valid value of the objective found over the iterations, using the progress metric described above. In such cases we choose a tolerance ϵ = 10^−2. (A sketch of a random space-filling initial design appears below the table.)
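
To make the pseudocode row concrete, here is a minimal Python sketch of the basic augmented Lagrangian loop. It is not the authors' implementation (their code lives in the R packages laGP and DiceOptim); the inner subproblem solver, the quadratic penalty form of the augmented Lagrangian, and the toy demo problem are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, g, x0, lam0, rho0, bounds, n_outer=20):
    """Sketch of a basic augmented Lagrangian loop (Algorithm 1 style).

    f : objective; g : vector-valued inequality constraints g(x) <= 0;
    lam0 : initial multipliers (>= 0); rho0 : initial penalty parameter.
    """
    lam, rho, x = np.asarray(lam0, float), float(rho0), np.asarray(x0, float)
    for _ in range(n_outer):
        # Step 2: (approximately) solve the AL subproblem; here via L-BFGS-B on
        # L_A(x) = f(x) + lam' g(x) + (1/(2 rho)) * sum(max(0, g(x))^2).
        def L_A(x):
            gx = np.asarray(g(x), float)
            return f(x) + lam @ gx + np.sum(np.maximum(gx, 0.0) ** 2) / (2.0 * rho)
        x = minimize(L_A, x, method="L-BFGS-B", bounds=bounds).x
        gx = np.asarray(g(x), float)
        # Step 3: multiplier update lam_j <- max(0, lam_j + g_j(x) / rho).
        lam = np.maximum(0.0, lam + gx / rho)
        # Step 4: keep rho if feasible; otherwise halve it (stronger penalty).
        if not np.all(gx <= 0.0):
            rho /= 2.0
    return x, lam, rho

# Toy illustration (assumed problem): minimize x1 + x2 s.t. 1 - x1*x2 <= 0 on [0, 2]^2.
x_best, lam, rho = augmented_lagrangian(
    f=lambda x: x[0] + x[1],
    g=lambda x: np.array([1.0 - x[0] * x[1]]),
    x0=[1.5, 1.5], lam0=[0.0], rho0=1.0, bounds=[(0.0, 2.0)] * 2,
)
print(x_best)  # expected to approach (1, 1)
```

The comparison metric quoted in the Research Type row (average, over 100 random restarts, of the best valid objective value as a function of the number of blackbox evaluations n) can be computed as in the following sketch; the array shapes and the use of np.inf before the first valid point are assumptions for illustration.

```python
import numpy as np

def best_valid_progress(objectives, valid):
    """Best valid objective value found so far, per evaluation budget n.

    objectives : (n_evals,) objective values in evaluation order.
    valid      : (n_evals,) booleans, True where all constraints hold.
    Returns the running minimum over valid points (inf until the first
    valid point is seen).
    """
    obj = np.where(valid, objectives, np.inf)
    return np.minimum.accumulate(obj)

def average_progress(runs_obj, runs_valid):
    """Average the per-run progress curves, e.g. over 100 random restarts.

    runs_obj, runs_valid : (n_runs, n_evals) arrays.
    """
    curves = np.array([best_valid_progress(o, v)
                       for o, v in zip(runs_obj, runs_valid)])
    return curves.mean(axis=0)
```

The Experiment Setup row mentions random space-filling initial designs of size n = 5. A common way to generate such a design is a Latin hypercube, used here as an assumed stand-in for whatever space-filling construction the authors sampled.

```python
import numpy as np
from scipy.stats import qmc

def initial_design(n=5, dim=2, bounds=None, seed=None):
    """Random Latin hypercube design of n points in `dim` dimensions,
    rescaled to the given box bounds (defaults to the unit cube)."""
    sampler = qmc.LatinHypercube(d=dim, seed=seed)
    pts = sampler.random(n)                   # points in [0, 1]^dim
    if bounds is not None:
        lo, hi = np.asarray(bounds, float).T  # bounds given as [(lo, hi), ...]
        pts = qmc.scale(pts, lo, hi)
    return pts

X0 = initial_design(n=5, dim=2, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=0)
```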