BINOCULARS for Efficient, Nonmyopic Sequential Experimental Design
Authors: Shali Jiang, Henry Chai, Javier Gonzalez, Roman Garnett
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct thorough experiments demonstrating that the proposed method significantly outperforms the myopic baselines and is competitive with (if not better than) state-of-the-art nonmyopic alternatives, while being much more efficient. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science and Engineering, Washington University in Saint Louis, Saint Louis, Missouri, USA 2Microsoft Research Cambridge, Cambridge, UK. |
| Pseudocode | Yes | Algorithm 1 BINOCULARS. A code sketch of the core loop appears below the table. |
| Open Source Code | No | Complete details can be found in our attached code. (Section 6.1) and Complete details of our implementation can be found in our attached code. (Section 6.2). The paper refers to 'attached code', which typically denotes supplementary material submitted for review or publication, but it provides no public URL or explicit statement that its BINOCULARS implementation has been publicly released. |
| Open Datasets | Yes | We implemented our nonmyopic BO policy and all baselines using BoTorch, which contains efficient EI and q-EI implementations. ... We present experiments for two rollout variants: 2.R.10 and 3.R.3. ... We present results on hyperparameter tuning functions used by Snoek et al. (2012); Wang and Jegelka (2017); Malkomes and Garnett (2018). These functions are evaluated on a predefined grid... (Section 6.1; see also https://www.sfu.ca/~ssurjano/optimization.html and https://www.sfu.ca/~ssurjano/integration.html). The paper uses well-known benchmark functions as datasets and provides URLs for some of them. |
| Dataset Splits | No | For all experiments, we start with 2d randomly-sampled observations and perform 20d further iterations, where d is the function's dimensionality. Unless otherwise noted, all results presented are aggregated over 100 repeats with different random initializations. The paper describes the experimental setup but specifies no distinct validation set or explicit train/validation/test splits in the conventional sense. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running experiments are provided in the paper. |
| Software Dependencies | No | We implemented our nonmyopic BO policy and all baselines using BoTorch, which contains efficient EI and q-EI implementations. (Section 6.1) and We implemented our nonmyopic BQ policy and all baselines using the GPML MATLAB package. (Section 6.2). Software packages are named, but specific version numbers are not provided. |
| Experiment Setup | Yes | We use GPs with a constant mean and a Matérn 5/2 ARD kernel to model the objective function, the default in BoTorch. We tune hyperparameters every iteration by maximizing the marginal likelihood using L-BFGS-B. We also maximize the q-EI acquisition function with L-BFGS-B. (Section 6.1) A hedged code sketch of this setup follows the table. |
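The experiment setup quoted above maps directly onto standard BoTorch components, and Algorithm 1 (BINOCULARS) amounts to maximizing a batch acquisition function and then evaluating a single point of the resulting batch. Below is a minimal sketch of one such iteration, assuming recent BoTorch API names (`SingleTaskGP`, `fit_gpytorch_mll`, `optimize_acqf`); the toy objective, the batch size `q=3`, and the first-point selection rule are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

d = 2  # dimensionality of a toy objective (assumption)
bounds = torch.stack([torch.zeros(d, dtype=torch.double),
                      torch.ones(d, dtype=torch.double)])

def objective(X):
    # Stand-in objective for illustration only.
    return -(X - 0.5).pow(2).sum(dim=-1, keepdim=True)

# 2d randomly-sampled initial observations, as in the paper's setup.
train_X = torch.rand(2 * d, d, dtype=torch.double)
train_Y = objective(train_X)

for _ in range(3):  # the paper runs 20d iterations; shortened here
    # SingleTaskGP defaults to a constant mean and a Matérn 5/2 ARD
    # kernel, matching the modeling choices quoted in the table.
    model = SingleTaskGP(train_X, train_Y)
    mll = ExactMarginalLogLikelihood(model.likelihood, model)
    # Hyperparameters are re-tuned each iteration by maximizing the
    # marginal likelihood; BoTorch delegates to scipy's L-BFGS-B.
    fit_gpytorch_mll(mll)

    # Maximize q-EI (here q = 3) with multi-start L-BFGS-B.
    qei = qExpectedImprovement(model, best_f=train_Y.max())
    batch, _ = optimize_acqf(qei, bounds=bounds, q=3,
                             num_restarts=10, raw_samples=256)

    # BINOCULARS-style step: evaluate only ONE point of the q-EI batch
    # (taking the first is an arbitrary choice; the paper studies
    # several selection rules).
    x_next = batch[:1]
    train_X = torch.cat([train_X, x_next])
    train_Y = torch.cat([train_Y, objective(x_next)])
```

Under this reading, the nonmyopic behavior comes from optimizing the batch jointly while committing to only one evaluation per iteration, which is what makes the approach far cheaper than rollout-based lookahead.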