Multi-objective Bayesian Optimization using Pareto-frontier Entropy

Authors: Shinya Suzuki, Shion Takeno, Tomoyuki Tamura, Kazuki Shitara, Masayuki Karasuyama

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our numerical experiments show the effectiveness of PFES through several benchmark datasets and real-world datasets from materials science. We compared PFES with ParEGO, EHI, SMSego, and MESMO. To evaluate performance, we used the hypervolume of the region dominated by the Pareto-frontier, which is a standard evaluation measure in MOO. For the kernel function in all the methods, we employed the Gaussian kernel k(x, x') = exp(-||x - x'||²/(2σ²)). (A kernel and hypervolume sketch appears after this table.)
Researcher Affiliation | Academia | 1 Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan; 2 Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan; 3 Department of Physical Science and Engineering, Nagoya Institute of Technology, Aichi, Japan; 4 Center for Materials Research by Information Integration, National Institute for Materials Science, Ibaraki, Japan; 5 Joining and Welding Research Institute, Osaka University, Osaka, Japan; 6 Nanostructures Research Laboratory, Japan Fine Ceramics Center, Aichi, Japan; 7 PRESTO, Japan Science and Technology Agency, Saitama, Japan.
Pseudocode | No | The paper describes computational procedures (e.g., the QHV algorithm, NSGA-II) but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology or a link to a code repository.
Open Datasets | No | For evaluating the decoupled acquisition function, we used two real-world datasets from computational materials science. ... The Bi2O3 and LLTO data are collected based on quantum- and classical-mechanics, respectively.
Dataset Splits | No | Each experiment was run 10 times with a different set of initial observations, which were 5 randomly selected points.
Hardware Specification | No | The paper reports computational time in Table 1, but does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as NSGA-II, the DIRECT algorithm, and the QHV algorithm, but does not provide specific version numbers for these or any other ancillary software dependencies.
Experiment Setup | Yes | For the kernel function in all the methods, we employed the Gaussian kernel k(x, x') = exp(-||x - x'||²/(2σ²)). The samplings of F* in PFES and X* in MESMO, which we call Pareto sampling, were performed 10 times each. For Pareto sampling, NSGA-II was applied to functions generated from RFM with 500 basis functions, and we set the maximum size of the Pareto set to 50, following (Hernández-Lobato et al., 2016). (A Pareto-sampling sketch appears after this table.)
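The Research Type entry quotes the Gaussian kernel shared by all compared methods and names the hypervolume as the evaluation measure. Since no code is released with the paper, the following is only a minimal sketch of those two ingredients: `gaussian_kernel` implements k(x, x') = exp(-||x - x'||²/(2σ²)), and `hypervolume_2d` computes the dominated hypervolume for a two-objective maximization problem. The function names, the restriction to two objectives, and the reference point are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))

def pareto_front(Y):
    """Return the non-dominated rows of Y (every column is maximized)."""
    keep = np.ones(len(Y), dtype=bool)
    for i, y in enumerate(Y):
        if keep[i]:
            dominated = np.all(Y <= y, axis=1) & np.any(Y < y, axis=1)
            keep &= ~dominated
    return Y[keep]

def hypervolume_2d(Y, ref):
    """Hypervolume dominated by 2-D points Y w.r.t. reference point ref
    (maximization; assumes every point dominates ref)."""
    front = pareto_front(np.asarray(Y, dtype=float))
    # Sort by the first objective in descending order; f2 is then increasing.
    front = front[np.argsort(-front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv

# Toy usage: (1.5, 1.5) is dominated by (2, 2), so only three points count.
Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.5, 1.5]])
print(hypervolume_2d(Y, ref=(0.0, 0.0)))  # -> 6.0
```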
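The Experiment Setup entry describes Pareto sampling: posterior functions are approximated with a random feature map (RFM, 500 basis functions) and NSGA-II is applied to the sampled functions, with the Pareto set capped at 50 points. The sketch below illustrates only the RFM part (approximate GP-posterior samples via Bayesian linear regression on random Fourier features) and then substitutes a simple random-candidate non-dominated filter for NSGA-II; it reuses `pareto_front` from the previous sketch. The values of `sigma`, `noise`, `bounds`, the candidate count, the truncation rule, and all function names are assumptions for illustration, not the paper's settings or code.

```python
import numpy as np

def rfm_features(X, W, b):
    """Random Fourier features approximating the Gaussian kernel."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def sample_posterior_function(X_obs, y_obs, sigma=0.5, noise=1e-3, n_basis=500, rng=None):
    """Draw one approximate GP-posterior sample f(x) = phi(x)^T theta via RFM,
    i.e., Bayesian linear regression on the random features (noise = noise variance)."""
    rng = np.random.default_rng(rng)
    d = X_obs.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(n_basis, d))   # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_basis)        # random phases
    Phi = rfm_features(X_obs, W, b)
    A = Phi.T @ Phi / noise + np.eye(n_basis)               # posterior precision
    mean = np.linalg.solve(A, Phi.T @ y_obs) / noise
    cov = np.linalg.inv(A)
    theta = rng.multivariate_normal(mean, cov)
    return lambda X: rfm_features(np.atleast_2d(X), W, b) @ theta

def sample_pareto_front(X_obs, Y_obs, bounds, n_candidates=2000, max_size=50, rng=None):
    """Crude stand-in for the NSGA-II step: evaluate one posterior sample per
    objective on random candidates and keep at most max_size non-dominated points.
    bounds = (lower_array, upper_array) over the input space."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    samples = [sample_posterior_function(X_obs, Y_obs[:, j], rng=rng)
               for j in range(Y_obs.shape[1])]
    F = np.column_stack([f(cand) for f in samples])
    front = pareto_front(F)   # from the previous sketch
    return front[:max_size]   # plain truncation; NSGA-II would use crowding-based selection
```

In PFES this sampling is repeated (10 times in the paper's setup) and the entropy-based acquisition is evaluated against each sampled Pareto front; the sketch stops at producing one sampled front.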