Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization
Authors: Samuel Daulton, Maximilian Balandat, Eytan Bakshy
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluation demonstrates that qEHVI is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time. |
| Researcher Affiliation | Industry | Samuel Daulton Facebook sdaulton@fb.com Maximilian Balandat Facebook balandat@fb.com Eytan Bakshy Facebook ebakshy@fb.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Acquisition functions are available as part of the open-source library BoTorch [5]. Code is available at https://github.com/pytorch/botorch. |
| Open Datasets | Yes | For synthetic problems, we consider the Branin-Currin problem (d = 2, M = 2, convex Pareto front) [6] and the C2-DTLZ2 problem (d = 12, M = 2, V = 1, concave Pareto front), which is a standard constrained benchmark from the MO literature [16]. |
| Dataset Splits | No | The paper does not specify training, validation, and test dataset splits in the conventional supervised-learning sense. It instead describes evaluation budgets and the number of trials for each optimization problem. |
| Hardware Specification | Yes | Table 1: Acquisition Optimization wall time in seconds on a CPU (2x Intel Xeon E5-2680 v4 @ 2.40GHz) and a GPU (Tesla V100-SXM2-16GB). |
| Software Dependencies | No | While BoTorch is mentioned as an open-source library used, specific version numbers for software dependencies (e.g., BoTorch, PyTorch) are not provided in the paper's main text. |
| Experiment Setup | Yes | Both plots show optimization performance on a DTLZ2 problem (d = 6, M = 2) with a budget of 100 evaluations (plus the initial quasi-random design). We report means and 2 standard errors across 20 trials. |