Approximation-Aware Bayesian Optimization
Authors: Natalie Maus, Kyurae Kim, David Eriksson, Geoff Pleiss, John P. Cunningham, Jacob Gardner
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate EULBO-based SVGPs on a number of benchmark BO tasks, described in detail in Section 4.1. These tasks include standard low-dimensional BO problems, e.g., the 6D Hartmann function, as well as 7 high-dimensional and high-throughput optimization tasks. |
| Researcher Affiliation | Collaboration | Natalie Maus, University of Pennsylvania (nmaus@seas.upenn.edu); Kyurae Kim, University of Pennsylvania; Geoff Pleiss, University of British Columbia & Vector Institute; David Eriksson, Meta; John P. Cunningham, Columbia University; Jacob R. Gardner, University of Pennsylvania |
| Pseudocode | Yes | Algorithm 1: EULBO Maximization Policy |
| Open Source Code | Yes | Code to reproduce all results in the paper is available at https://github.com/nataliemaus/aabo. |
| Open Datasets | Yes | Hartmann 6D. The widely used Hartmann benchmark function (Surjanovic and Bingham, 2013). Molecular design tasks (x4). We select four challenging tasks from the Guacamol benchmark suite of molecular design tasks (Brown et al., 2019) |
| Dataset Splits | No | The paper describes iterative optimization benchmarks where data is sequentially acquired. It states 'For all methods, we initialize using a set of 100 data points sampled uniformly at random in the search space.' but does not provide explicit train/validation/test splits for a pre-existing dataset in terms of percentages or sample counts. |
| Hardware Specification | Yes | All results in the paper required the use of GPU workers (one GPU per run of each method on each task). The majority of runs were executed on an internal cluster (details in Table 2), where each node was equipped with an NVIDIA RTX A5000 GPU. In addition, we used cloud compute resources for a short period leading up to the submission of the paper: 40 RTX 4090 GPU workers from runpod.io, each with approximately 24 GB of GPU memory. |
| Software Dependencies | No | We implement EULBO and baseline methods using the GPyTorch (Gardner et al., 2018) and BoTorch (Balandat et al., 2020) packages. For this, we re-initialize the Adam states at the beginning of each BO step. The paper does not explicitly state version numbers for these software packages. |
| Experiment Setup | Yes | The hyperparameters used in our experiments are organized in Table 1. For the full extent of the implementation details and experimental configuration, please refer to the supplementary code. Table 1 (Configurations of Hyperparameters used for the Experiments): γ_x = 0.001 (Adam stepsize for the query x); γ_θ = 0.01 (Adam stepsize for the SVGP parameters θ); B = 32 (minibatch size); G_clip = 2.0 (gradient clipping threshold); n_epochs = 30 (maximum number of epochs); n_fail = 3 (maximum number of failures to improve); M = 100 (number of inducing points); N_0 = \|D_0\| = 100 (number of observations for initializing BO); # quad. = 20 (number of Gauss-Hermite quadrature points). |
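
To make the Table 1 settings concrete, below is a minimal, hypothetical sketch of how they could map onto Algorithm 1's joint optimization of the next query point and the SVGP parameters. It is not the authors' implementation (see https://github.com/nataliemaus/aabo for that): the expected-utility term of the EULBO is replaced here by a placeholder (the posterior mean at the query point), and the data, model, and names (`SVGPModel`, `x_query`) are illustrative only.

```python
import torch
import gpytorch


class SVGPModel(gpytorch.models.ApproximateGP):
    """Standard GPyTorch SVGP with M learnable inducing points."""

    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution, learn_inducing_locations=True
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=2.5))

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


dim = 6
train_x, train_y = torch.rand(100, dim), torch.randn(100)  # stand-in data
model = SVGPModel(inducing_points=torch.rand(100, dim))    # M = 100 inducing points
likelihood = gpytorch.likelihoods.GaussianLikelihood()
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.numel())

x_query = torch.rand(1, dim, requires_grad=True)  # candidate query point

# Two Adam parameter groups, mirroring the two stepsizes in Table 1.
optimizer = torch.optim.Adam([
    {"params": [x_query], "lr": 0.001},  # gamma_x: stepsize for the query
    {"params": list(model.parameters()) + list(likelihood.parameters()), "lr": 0.01},
])

best_loss, n_fails = float("inf"), 0
for epoch in range(30):  # n_epochs: maximum number of epochs
    perm = torch.randperm(train_x.size(0))
    epoch_loss = 0.0
    for i in range(0, train_x.size(0), 32):  # B = 32: minibatch size
        idx = perm[i : i + 32]
        optimizer.zero_grad()
        elbo = mll(model(train_x[idx]), train_y[idx])
        # Placeholder utility: posterior mean at the query point. The paper's
        # EULBO instead lower-bounds the expected utility of an acquisition
        # function (estimated with 20 Gauss-Hermite quadrature points).
        utility = model(x_query).mean.sum()
        loss = -(elbo + utility)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(  # G_clip = 2.0: gradient clipping threshold
            [x_query] + list(model.parameters()), max_norm=2.0
        )
        optimizer.step()
        epoch_loss += loss.item()
    # n_fail = 3: stop after three consecutive epochs without improvement.
    if epoch_loss < best_loss - 1e-6:
        best_loss, n_fails = epoch_loss, 0
    else:
        n_fails += 1
        if n_fails >= 3:
            break
```

Note that the paper's remark about re-initializing the Adam states at the beginning of each BO step would, in this sketch, amount to constructing a fresh `torch.optim.Adam` per BO iteration.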
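
The initialization protocol quoted in the Dataset Splits row (100 points sampled uniformly at random in the search space) is straightforward to reproduce on the Hartmann 6D benchmark using BoTorch's built-in test function. The snippet below is a sketch of that setup, not taken from the paper's code.

```python
import torch
from botorch.test_functions import Hartmann

# Hartmann 6D benchmark (BoTorch ships an implementation of the function the
# paper cites from Surjanovic and Bingham, 2013). negate=True converts the
# usual minimization problem into maximization, as is conventional in BO.
f = Hartmann(dim=6, negate=True)
lower, upper = f.bounds[0], f.bounds[1]  # Hartmann's domain is the unit hypercube

# N_0 = 100 initial observations, sampled uniformly at random in the search space.
n_init = 100
X0 = lower + (upper - lower) * torch.rand(n_init, 6)
Y0 = f(X0)
print(X0.shape, Y0.shape, Y0.max().item())
```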
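
Since the Software Dependencies row notes that no version numbers are stated, anyone reproducing the results may want to record the installed package versions alongside their runs; a trivial check:

```python
import torch, gpytorch, botorch

# Log the versions actually installed, since the paper does not pin them.
print("torch:   ", torch.__version__)
print("gpytorch:", gpytorch.__version__)
print("botorch: ", botorch.__version__)
```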