Bayesian Optimization with Inequality Constraints
Authors: Jacob Gardner, Matt Kusner, Zhixiang (Eddie) Xu, Kilian Weinberger, John Cunningham
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on simulated and real data, demonstrating that constrained Bayesian optimization can quickly find optimal and feasible points, even when small feasible regions cause standard methods to fail. ... We evaluate our method, which we call constrained Bayesian Optimization (cBO), on two synthetic tasks and two real world applications. |
| Researcher Affiliation | Academia | Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO 63130 Columbia University, 116th St and Broadway, New York, NY 10027 |
| Pseudocode | No | No structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We will release our code and all scripts to reproduce the results in this section at http://tinyurl.com/kgj56vy. |
| Open Datasets | Yes | Table 1 lists datasets such as YALEFACES, COIL, ISOLET, USPS, LETTERS, ADULT*, W8A*, MNIST*. Table 2 lists datasets such as SPAM, MAGIC*, ADULT*, W8A*, IJCNN1*, FOREST*. Many of these (e.g., USPS, MNIST, UCI datasets linked via tinyurl) are well-known publicly available datasets. |
| Dataset Splits | No | The paper mentions minimizing 'the validation error' and 'validation evaluation time', but does not give dataset split percentages, sample counts, or explicit instructions for partitioning the data into training, validation, and test sets. It mentions 'leave-one-out (LOO) classification error', a form of cross-validation, but does not detail the overall splits. |
| Hardware Specification | No | The paper mentions 'Computations were performed via the Washington University Center for High Performance Computing' but does not provide specific hardware details such as CPU/GPU models or memory specifications. |
| Software Dependencies | No | The paper states 'Our implementation is written in MATLAB™.' It does not provide version numbers for MATLAB or for any other software libraries or dependencies used. |
| Experiment Setup | No | The paper mentions 'All GP hyperparameters were selected by maximizing the marginal likelihood' and specifies the number of evaluations allowed (30 or 100), but does not provide concrete hyperparameter values (e.g., learning rates, batch sizes, epochs) for the underlying models or the Bayesian optimization process itself beyond general descriptions. |
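For context on what the paper's cBO method computes: its acquisition function weights standard expected improvement by the probability, under a second Gaussian process, that the constraint is satisfied. A minimal sketch of that idea (using scikit-learn GPs and an illustrative toy problem; the function names and the toy objective/constraint are assumptions for this example, not taken from the paper or its released code):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def constrained_ei(x, gp_obj, gp_con, y_best, lambda_=0.0):
    """Constrained expected improvement: EI(x) * Pr(c(x) <= lambda_).

    gp_obj models the objective, gp_con models the constraint; y_best is the
    best feasible objective value observed so far (minimization).
    """
    x = np.atleast_2d(x)
    mu, sigma = gp_obj.predict(x, return_std=True)
    mu_c, sigma_c = gp_con.predict(x, return_std=True)
    sigma = np.maximum(sigma, 1e-9)      # guard against zero predictive std
    sigma_c = np.maximum(sigma_c, 1e-9)
    # Standard expected improvement for minimization
    z = (y_best - mu) / sigma
    ei = (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Probability of feasibility under the constraint GP
    pf = norm.cdf((lambda_ - mu_c) / sigma_c)
    return ei * pf

# Toy usage: minimize f(x) = (x - 2)^2 subject to c(x) = sin(x) <= 0
X = np.linspace(0.0, 6.0, 8).reshape(-1, 1)
f = (X.ravel() - 2.0) ** 2
c = np.sin(X.ravel())
gp_obj = GaussianProcessRegressor().fit(X, f)
gp_con = GaussianProcessRegressor().fit(X, c)
y_best = f[c <= 0.0].min()               # best feasible observation
grid = np.linspace(0.0, 6.0, 200).reshape(-1, 1)
scores = constrained_ei(grid, gp_obj, gp_con, y_best)
x_next = grid[np.argmax(scores)]         # next point to evaluate
```

The sketch omits the GP hyperparameter fitting via marginal likelihood that the paper describes; in practice both GPs would be refit after each new evaluation.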