Pareto Front-Diverse Batch Multi-Objective Bayesian Optimization
Authors: Alaleh Ahmadianshalchi, Syrine Belakaria, Janardhan Rao Doppa
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on multiple MOO benchmarks demonstrate that PDBO outperforms prior methods in terms of both the quality and diversity of Pareto solutions. [...] Experimental evaluation of PDBO and baselines on multiple benchmark MOO problems. |
| Researcher Affiliation | Academia | 1 School of EECS, Washington State University 2 Computer Science Department, Stanford University a.ahmadianshalchi@wsu.edu, syrineb@stanford.edu, jana.doppa@wsu.edu |
| Pseudocode | Yes | Algorithm 1 provides a pseudocode with high-level steps of the PDBO approach. The DPP-SELECT and ADAPTIVEAF-SELECT represent the second and third key steps. The details of these methods and their corresponding pseudocodes are provided in Sections 4.1 and 4.2, respectively. |
| Open Source Code | Yes | The code for PDBO is publicly available at https://github.com/Alaleh/PDBO. |
| Open Datasets | Yes | Benchmarks. We use several synthetic problems: ZDT-1, ZDT-2, ZDT-3 (Zitzler, Deb, and Thiele 2000), DTLZ-1, DTLZ-3, DTLZ-5 (Deb et al. 2005) and three real world problems: the gear train design problem (Deb and Srinivasan 2006; Konakovic Lukovic, Tian, and Matusik 2020), SWLLVM (Siegmund et al. 2012) and unmanned aerial vehicle power system design (Belakaria et al. 2020b). |
| Dataset Splits | No | The paper conducts experiments on benchmarks and describes the number of function evaluations and iterations but does not specify traditional training, validation, or test dataset splits, which is common for Bayesian Optimization where data is acquired iteratively. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for running the experiments (e.g., specific GPU/CPU models, memory specifications). |
| Software Dependencies | Yes | To solve the constrained optimization problem in the DPP algorithm, we utilize an implementation of the SQP method (Lalee, Nocedal, and Plantenga 1998; Nocedal and Wright 2006) from the Python SciPy library (Virtanen et al. 2020). |
| Experiment Setup | Yes | Experimental Setup. All experiments are initialized with five random inputs/evaluations and run for at least 250 function evaluations. We conduct experiments with four different batch sizes B ∈ {2, 4, 8, 16} and adjust the number of iterations accordingly. For instance, when using a batch size of two, we run the algorithm for 125 iterations. Each experiment is repeated 25 times, and we report the average and standard deviation of the hypervolume indicator and the DPF metric. [...] We set the hyperparameters of PDBO to γ = 0.7 and τ = 4 as recommended by (Hoffman et al. 2011; Vasconcelos et al. 2019). We define the AF portfolio as P = {EI, TS, UCB, ID}. |
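The Software Dependencies row notes that the paper solves its constrained optimization subproblem with the SQP method of Lalee, Nocedal, and Plantenga (1998) via SciPy. In SciPy that algorithm is exposed as the `trust-constr` method of `scipy.optimize.minimize`; the sketch below shows the call pattern with an illustrative quadratic objective and linear constraint, not the paper's actual DPP-based objective.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, Bounds

def objective(x):
    # Placeholder smooth objective; PDBO would optimize its DPP-based
    # acquisition objective here instead.
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# Illustrative nonlinear-constraint wrapper encoding x0 + x1 <= 3.
constraint = NonlinearConstraint(lambda x: x[0] + x[1], -np.inf, 3.0)
bounds = Bounds([0.0, 0.0], [5.0, 5.0])

result = minimize(objective, x0=np.array([2.0, 0.0]),
                  method="trust-constr",
                  constraints=[constraint], bounds=bounds)
print(result.x)  # constrained minimizer, approximately [0.75, 2.25]
```

The unconstrained minimizer (1, 2.5) violates the constraint, so the solver lands on the constraint boundary, which is the situation the trust-region SQP machinery is designed for.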
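The Experiment Setup row reports a Hedge-style adaptive selection over the AF portfolio P = {EI, TS, UCB, ID} with γ = 0.7 and τ = 4. As a rough sketch of that idea (the reward definition and exact update rule here are placeholder assumptions; the paper follows Hoffman et al. 2011 and Vasconcelos et al. 2019, whose rules may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)
portfolio = ["EI", "TS", "UCB", "ID"]   # AF portfolio from the paper
gamma, tau = 0.7, 4                     # hyperparameters stated in the paper
rewards = {af: [] for af in portfolio}  # per-AF reward history

def select_af():
    # Softmax over each AF's cumulative reward in the last tau rounds,
    # scaled by gamma (placeholder form of the Hedge weighting).
    gains = np.array([sum(rewards[af][-tau:]) for af in portfolio])
    weights = np.exp(gamma * gains)
    probs = weights / weights.sum()
    return rng.choice(portfolio, p=probs)

def update(af, reward):
    # Feed back an observed reward (e.g., improvement from the chosen point).
    rewards[af].append(reward)

# Usage: each BO iteration, sample an AF, evaluate, then record its reward.
chosen = select_af()
update(chosen, reward=1.0)
```

With empty histories all gains are zero, so the first draw is uniform over the portfolio; rewarded acquisition functions are then sampled more often in later rounds.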