Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization

Authors: Hao Wang, Kaifeng Yang, Michael Affenzeller

ICML 2024

Reproducibility assessment — each variable below lists the extracted result and the supporting LLM response:

Research Type: Experimental
"Experimentally, we show that on many widely applied bi-objective test problems, ε-PoHVI significantly outperforms other related acquisition functions, e.g., ε-PoI and expected hypervolume improvement, when the GP model exhibits large prediction uncertainty."

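The quantity behind ε-PoHVI is the probability that the hypervolume improvement (HVI) of a candidate point exceeds a threshold ε under the GP posterior. The paper derives this distribution in closed form; the snippet below is only a minimal Monte Carlo sketch of P(HVI ≥ ε), assuming independent Gaussian marginals for the two (minimized) objectives, with helper names (hv_2d, mc_prob_hvi) of our own choosing rather than from the authors' code.

```python
import numpy as np

def hv_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. reference point ref."""
    # Keep only points that strictly dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:            # sweep in ascending f1
        if f2 < prev_f2:          # non-dominated: add its exclusive rectangle
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def mc_prob_hvi(mu, sigma, front, ref, eps, n=10_000, seed=0):
    """Monte Carlo estimate of P(HVI >= eps) at one candidate point, given
    GP posterior means mu = (mu1, mu2) and standard deviations sigma."""
    rng = np.random.default_rng(seed)
    base = hv_2d(front, ref)
    samples = rng.normal(mu, sigma, size=(n, 2))    # independent marginals
    hvi = np.fromiter((hv_2d(front + [tuple(y)], ref) - base for y in samples),
                      dtype=float, count=n)
    return float((hvi >= eps).mean())
```

Enlarging sigma in a call such as mc_prob_hvi((0.4, 0.6), (0.3, 0.3), [(0.2, 0.8), (0.7, 0.3)], (1.0, 1.0), eps=0.01) inflates this probability, which is the large-uncertainty regime in which the paper reports ε-PoHVI's advantage.
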
Researcher Affiliation: Academia
"Leiden University, Leiden, The Netherlands; University of Applied Sciences, Hagenberg, Austria."

Pseudocode: No
The paper contains mathematical derivations and descriptions of methods, but no structured pseudocode or algorithm blocks.

Open Source Code: Yes
"Our source code is available at https://github.com/wangronin/HVI-distribution"

Open Datasets: Yes
"We investigate the empirical performance of ε-PoHVI against ε-PoI and EHVI on three sets of test problems: (1) the classical bi-objective ZDT problems (Zitzler et al., 2000)... (2) the WOSGZ1-8 problems (Wang et al., 2019)... (3) a real-world four-bar truss design problem (Cheng & Li, 1999; Tanabe & Ishibuchi, 2020) (denoted as RE)..."

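For reproduction, the cited benchmarks are available in standard Python suites; as one possibility (the paper does not name its benchmark library), pymoo ships the ZDT family:

```python
# Sketch assuming pymoo (pip install pymoo); the library choice is ours.
import numpy as np
from pymoo.problems import get_problem

problem = get_problem("zdt1")           # bi-objective ZDT1, d = 30 by default
X = np.random.rand(5, problem.n_var)    # five points in [0, 1]^d
F = problem.evaluate(X)                 # shape (5, 2): objective values
```
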
Dataset Splits: No
"We initialize the BO algorithm with min(60, 6d) points generated with Latin Hypercube sampling and terminate the algorithm at 170 iterations." The paper describes the initialization and iteration budget of the Bayesian optimization loop, which does not use fixed training/validation/test splits as in supervised learning.

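The quoted min(60, 6d) Latin Hypercube initialization is straightforward to reproduce, e.g., with SciPy's quasi-Monte Carlo module; the helper below is our sketch, not the authors' code:

```python
import numpy as np
from scipy.stats import qmc

def initial_design(lower, upper, seed=0):
    """min(60, 6d) Latin Hypercube points on the box [lower, upper]."""
    d = len(lower)
    n = min(60, 6 * d)
    unit = qmc.LatinHypercube(d=d, seed=seed).random(n)  # samples in [0, 1]^d
    return qmc.scale(unit, lower, upper)                 # rescale to the box

X0 = initial_design(np.zeros(5), np.ones(5))  # a 5-d problem gets 30 points
```
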
Hardware Specification: No
No specific hardware details (e.g., CPU or GPU models, memory) are mentioned for the experimental setup.

Software Dependencies: No
The paper mentions the DGEMO algorithmic framework, a Matérn 5/2 kernel, and the CMA-ES algorithm, but provides no version numbers for any software dependencies or libraries used in the implementation.

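Although no versions are pinned, the named components map onto standard libraries. A minimal sketch, assuming scikit-learn for the per-objective GPs and the cma package for CMA-ES (these library choices are ours; the paper names only the methods):

```python
import cma                                            # pip install cma
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def fit_objective_gps(X, Y):
    """One independent GP per objective, Matérn 5/2 kernel (nu = 2.5)."""
    return [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
            .fit(X, Y[:, j]) for j in range(Y.shape[1])]

def maximize_acquisition(acq, x0, sigma0=0.3):
    """Maximize an acquisition function with CMA-ES (cma minimizes, so negate)."""
    xbest, _ = cma.fmin2(lambda x: -acq(np.asarray(x)), list(x0), sigma0,
                         {'bounds': [0.0, 1.0], 'verbose': -9})
    return np.asarray(xbest)
```
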
Experiment Setup: Yes
"We initialize the BO algorithm with min(60, 6d) points generated with Latin Hypercube sampling and terminate the algorithm at 170 iterations. We build two independent Gaussian processes, one for each objective, with a Matérn 5/2 kernel. We maximize the acquisition function in each iteration with the covariance matrix adaptation evolution strategy (CMA-ES) algorithm (Hansen, 2006). ... ε-PoHVI-scaling determines the parameter ε_t at iteration t with the schedule ε_t = ε_0 exp(−ct), where ε_0 = 0.05 and c = 0.02. ... ε-PoHVI-smoothing exponentially smooths the hypervolume improvement measured during the optimization: ε_{t+1} = α (HV(P_t, r) − HV(P_{t−1}, r)) + (1 − α) ε_t, where α = 0.5 and ε_0 = 0.05."

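The two ε schedules quoted above translate directly to code. A sketch using the paper's constants (ε_0 = 0.05, c = 0.02, α = 0.5); the function names are ours:

```python
import math

def eps_scaling(t, eps0=0.05, c=0.02):
    """ε-PoHVI-scaling: eps_t = eps0 * exp(-c * t)."""
    return eps0 * math.exp(-c * t)

def eps_smoothing(eps_t, hv_t, hv_prev, alpha=0.5):
    """ε-PoHVI-smoothing: eps_{t+1} = alpha * (HV(P_t, r) - HV(P_{t-1}, r))
    + (1 - alpha) * eps_t, i.e., exponential smoothing of the HV gain."""
    return alpha * (hv_t - hv_prev) + (1 - alpha) * eps_t
```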