Bayesian Optimization with High-Dimensional Outputs
Authors: Wesley J. Maddox, Maximilian Balandat, Andrew G. Wilson, Eytan Bakshy
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate empirically how large-scale sampling from MTGPs can aid in challenging multi-objective, constrained, and contextual Bayesian Optimization problems (Section 4). |
| Researcher Affiliation | Collaboration | Wesley J. Maddox (New York University, wjm363@nyu.edu); Maximilian Balandat (Facebook, balandat@fb.com); Andrew Gordon Wilson (New York University, andrewgw@cims.nyu.edu); Eytan Bakshy (Facebook, eytan@fb.com) |
| Pseudocode | No | The paper describes methods and procedures in narrative text and mathematical equations, but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is fully integrated into BoTorch, see https://botorch.org/tutorials/composite_bo_with_hogp and https://botorch.org/tutorials/composite_mtbo for tutorials. |
| Open Datasets | Yes | We consider a multi-task version of the Hartmann-6 function... We compare Matheron sampled MTGPs to batch independent MTGPs on the C2DTLZ2 [19]... and OSY [43]... Lunar Lander... from the OpenAI Gym [8]... MOPTA08 benchmark problem [35]... Chemical Pollutants... originally defined in Bliznyuk et al. [5]... Optimizing PDEs... solved in py-pde [65]... Cell-Tower Coverage: Following Dreifuerst et al. [21]... Optical Interferometer... as in Sorokin et al. [53]. |
| Dataset Splits | No | The paper describes experiments on black-box optimization functions and simulated environments where data points are queried sequentially, rather than using pre-existing datasets with explicit train/validation/test splits. |
| Hardware Specification | Yes | on a single Tesla V100 GPU (a,b) and on a single CPU (c,d). |
| Software Dependencies | No | The paper mentions software like 'BoTorch' and 'py-pde' but does not provide specific version numbers for these or other ancillary software components. |
| Experiment Setup | Yes | Following Daulton et al. [17] we use both qParEGO and qEHVI with q = 2, for C2DTLZ2 and optimize for 200 iterations... using a batch size of q = 10, optimizing for 30 iterations... initialize with 150 data points and use TuRBO with Thompson sampling with batches of q = 20 for a total of 1000 function evaluations and repeat over 30 trials. |