Constrained Bayesian Optimization under Partial Observations: Balanced Improvements and Provable Convergence

Authors: Shengbo Wang, Ke Li

AAAI 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
    "Our proposed method is empirically studied on both synthetic and real-world problems. The results demonstrate the competitiveness of our method for solving POCOPs."
    Experiment Setup: "In this section, we present the experimental settings used in our empirical study."
    Experiment Results: "The optimization trajectories of all experiments are given in Figures 4 and 5. In addition, the median best-evaluated values (BOVs) and average ratios of feasible evaluations (ROFs) of different algorithms are presented in Tables 1 and 2."
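The two reported metrics above can be computed per run from the evaluation logs. The helper below is a hypothetical sketch (not the authors' code) assuming a minimization problem, where each run records its objective values and a feasibility flag per evaluation:

```python
import numpy as np

# Hypothetical helper (not from the paper's implementation) computing the
# two reported metrics, assuming minimization.
def bov_and_rof(values, feasible):
    """values: (runs, evals) objective values; feasible: boolean mask of same shape."""
    values = np.asarray(values, dtype=float)
    feasible = np.asarray(feasible, dtype=bool)
    masked = np.where(feasible, values, np.inf)  # ignore infeasible evaluations
    best_per_run = masked.min(axis=1)            # best feasible value in each run
    bov = np.median(best_per_run)                # median best-observed value
    rof = feasible.mean()                        # average ratio of feasible evaluations
    return bov, rof
```

For example, two runs with trajectories `[3, 1, 2]` (second point infeasible) and `[5, 4, 6]` (all feasible) give per-run bests of 2 and 4, hence a median BOV of 3.0 and a ROF of 5/6.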
Researcher Affiliation: Academia
    Shengbo Wang (1), Ke Li (2)
    (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
    (2) Department of Computer Science, University of Exeter, EX4 4RN, Exeter, UK
Pseudocode: Yes
    "Algorithm 1: Pseudo code of CBOB"
Open Source Code: Yes
    "The source code of our project is available." https://github.com/COLA-Laboratory/CBOB
Open Datasets: Yes
    "Our experiments consider various optimization tasks, including synthetic problems, engineering design cases, hyperparameter optimization (HPO) problems based on scikit-learn (Pedregosa et al. 2011), and reinforcement learning tasks based on OpenAI Gym (Brockman et al. 2016), to constitute our benchmark suite. In addition, we consider the following two scenarios of POCOPs."
Dataset Splits: No
    No explicit train/validation/test splits (percentages, sample counts, or cross-validation details) are given in the paper's main text. The paper mentions "11n initial samples" and "100 function evaluations (FEs)", but these refer to the initial sampling and optimization budget, not to data partitioning for model training and validation.
Hardware Specification: No
    No specific hardware details (e.g., GPU/CPU models, processor speeds, or memory amounts) used for running the experiments are mentioned in the paper.
Software Dependencies: No
    The paper mentions software such as scikit-learn (Pedregosa et al. 2011) and OpenAI Gym (Brockman et al. 2016), and states that "All algorithms are implemented according to their open-source code (Eriksson and Poloczek 2021; Takeno et al. 2022)". However, it does not provide version numbers for these libraries or other dependencies, which would be necessary for reproducible software details.
Experiment Setup: Yes
    "For CBOB with equation (10), we fix β = 1.96 to obtain a 95% confidence level. Each experiment is independently repeated 20 times with shared random seeds. For all tasks, the Sobol sequence is used to generate 11n initial samples, then 100 function evaluations (FEs) are performed in each experiment. Detailed settings of all algorithms and benchmark problems are presented in Section D of the supplemental document."
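The two concrete numbers in this setup can be reproduced with standard tooling. The sketch below (an assumed illustration, not the authors' implementation; the function name `initial_design` is hypothetical) draws the 11n Sobol initial samples in the unit hypercube with SciPy and shows that β = 1.96 is the two-sided 95% Gaussian quantile:

```python
from scipy.stats import norm, qmc

def initial_design(n_dims, seed=0):
    """Draw the 11n Sobol initial samples in [0, 1]^n_dims."""
    sampler = qmc.Sobol(d=n_dims, scramble=True, seed=seed)
    return sampler.random(11 * n_dims)  # shape (11 * n_dims, n_dims)

# beta = 1.96 is the two-sided 95% quantile of the standard normal:
beta = norm.ppf(0.975)  # ~ 1.96

X0 = initial_design(n_dims=3)  # 33 initial points for a 3-dimensional task
```

Samples would then be rescaled from [0, 1]^n to the actual box constraints of each benchmark problem via `qmc.scale`.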