Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes

Authors: Shion Takeno, Masahiro Nomura, Masayuki Karasuyama

ICML 2023

Reproducibility Variable | Result | LLM Response

Research Type | Experimental
Towards building a more practical preferential BO, we develop a new method that achieves both high computational efficiency and low sample complexity, and then demonstrate its effectiveness through extensive numerical experiments. Numerical experiments on 12 benchmark functions show that the proposed method achieves better or at least competitive performance in terms of both computational efficiency and sample complexity compared with Gaussian approximation-based preferential BO (González et al., 2017; Siivola et al., 2021; Fauvel & Chalk, 2021) and MCMC-based preferential BO (Benavoli et al., 2021a;b), respectively.

Researcher Affiliation | Collaboration
Shion Takeno (1,2,3), Masahiro Nomura (2), Masayuki Karasuyama (1). 1: Nagoya Institute of Technology, Aichi, Japan; 2: CyberAgent, Tokyo, Japan; 3: RIKEN AIP, Tokyo, Japan.

Pseudocode | Yes
Algorithm 1 shows the procedure of HB. Algorithm 2 shows the pseudo-code.
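
Algorithm 1 (HB, the paper's "hallucination believer") conditions an ordinary GP on a single posterior sample of the latent function values at the already-queried duel points, after which standard acquisition functions such as UCB can be applied. Below is a minimal NumPy sketch of that idea; it is an illustration under our reading of the paper, not the authors' implementation, and `f_sample` merely stands in for a draw from the paper's Gibbs sampler over the skew GP posterior.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_obs, f_obs, X_test, noise_var=1e-4):
    """Ordinary GP posterior given (hallucinated) latent values f_obs at X_obs."""
    K = rbf_kernel(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    Ks = rbf_kernel(X_obs, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f_obs))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - (v**2).sum(axis=0)  # unit prior variance for this kernel
    return mean, np.maximum(var, 1e-12)

def hb_ucb(X_duels, f_sample, X_cand, beta_sqrt=2.0):
    """HB-UCB: UCB on the GP obtained by conditioning on one posterior
    sample (the 'hallucination') of f at the queried duel points."""
    mean, var = gp_posterior(X_duels, f_sample, X_cand)
    return mean + beta_sqrt * np.sqrt(var)

# Illustrative usage: in the actual method, f_sample is one Gibbs draw
# from the skew GP posterior; here we fake it with random numbers.
rng = np.random.default_rng(0)
X_duels = rng.uniform(size=(6, 2))   # 3 duels = 6 queried points in 2-D
f_sample = rng.normal(size=6)        # stand-in for a Gibbs posterior draw
X_cand = rng.uniform(size=(100, 2))  # candidate pool for the next query
x_next = X_cand[np.argmax(hb_ucb(X_duels, f_sample, X_cand))]
```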

Open Source Code | Yes
Our experimental codes are publicly available at https://github.com/CyberAgentAILab/preferentialBO.

Open Datasets | Yes
We employed the 12 benchmark functions. All the details of benchmark functions are shown in https://www.sfu.ca/~ssurjano/optimization.html.
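
The benchmarks come from the Surjanovic & Bingham test-function library linked above. As a concrete example (the excerpt does not list which 12 functions were used, so Branin is only an assumption of a representative), here is the Branin function from that page, together with the kind of noisy pairwise comparison ("duel") a preferential BO method observes instead of raw function values; the duel noise level is illustrative, not taken from the paper.

```python
import numpy as np

def branin(x1, x2):
    """Branin function (https://www.sfu.ca/~ssurjano/branin.html).
    Domain: x1 in [-5, 10], x2 in [0, 15]; global minimum ~0.397887."""
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

def duel(x, x_prime, noise_std=1e-2, rng=np.random.default_rng(0)):
    """Noisy pairwise comparison: True if x is preferred to x_prime
    (lower value wins, since these benchmarks are minimized)."""
    return (branin(*x) + rng.normal(0.0, noise_std)
            < branin(*x_prime) + rng.normal(0.0, noise_std))

print(duel((np.pi, 2.275), (0.0, 0.0)))  # near-optimal point should win
```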

Dataset Splits | No
The paper evaluates the optimization algorithms on benchmark functions. It mentions 10 random initializations for regret calculation but does not specify explicit train/validation/test dataset splits in the conventional supervised-learning sense.

Hardware Specification | No
The paper does not provide specific hardware details such as the GPU or CPU models used for running the experiments.

Software Dependencies | No
The paper mentions general software components such as the RBF kernel and Python, and refers to an implementation by other authors, but does not provide version numbers for key software dependencies or libraries.

Experiment Setup | Yes
For preferential GP models, we use the RBF kernel with automatic relevance determination (Rasmussen & Williams, 2005), whose lengthscales are selected by marginal likelihood maximization every 10 iterations, and set a fixed noise variance σ²_noise = 10⁻⁴. For the Gibbs sampling parameters, the burn-in is 1000, and the MC sample size for DuelUCB and EIIG is 1000 (thinning is not performed). For HB-UCB, we use β^{1/2} = 2.
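
For readers who want to mirror these reported settings, the following scikit-learn sketch reproduces the stated hyperparameters (ARD RBF kernel, lengthscales chosen by marginal likelihood maximization, fixed noise variance 10⁻⁴, β^{1/2} = 2 for UCB). The authors' code uses its own preferential GP implementation, so this is only an approximation of the reported configuration; the input dimension and the re-fit-every-10-iterations schedule are assumptions noted in comments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

DIM = 2            # input dimension of the benchmark (assumption)
NOISE_VAR = 1e-4   # fixed noise variance sigma^2_noise from the paper
BETA_SQRT = 2.0    # beta^{1/2} for HB-UCB, as reported

# ARD RBF kernel: one lengthscale per input dimension.
kernel = RBF(length_scale=np.ones(DIM), length_scale_bounds=(1e-2, 1e2))

# alpha places the fixed noise variance on the kernel diagonal; fit()
# selects the lengthscales by marginal likelihood maximization (the
# paper re-runs this selection once every 10 BO iterations).
gp = GaussianProcessRegressor(kernel=kernel, alpha=NOISE_VAR)

rng = np.random.default_rng(0)
X = rng.uniform(size=(10, DIM))
f = rng.normal(size=10)          # stand-in for sampled latent values
gp.fit(X, f)

mean, std = gp.predict(X, return_std=True)
ucb = mean + BETA_SQRT * std     # UCB score with beta^{1/2} = 2
```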