Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Multi-fidelity Bayesian Optimization with Max-value Entropy Search and its Parallelization

Authors: Shion Takeno, Hitoshi Fukuoka, Yuhki Tsukada, Toshiyuki Koyama, Motoki Shiga, Ichiro Takeuchi, Masayuki Karasuyama

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate effectiveness of our approach by using benchmark datasets and a real-world application to materials science data.
Researcher Affiliation | Academia | 1 Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan; 2 Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan; 3 Department of Materials Design Innovation Engineering, Nagoya University, Aichi, Japan; 4 PRESTO, Japan Science and Technology Agency, Saitama, Japan; 5 Department of Electrical, Electronic and Computer Engineering, Gifu University, Gifu, Japan; 6 Center for Materials Research by Information Integration, National Institute for Material Science, Ibaraki, Japan.
Pseudocode | Yes | Algorithm 1 shows the procedure of MF-MES for sequential querying.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | We used a synthetic function generated by MF-GPR, two benchmark functions, and a real-world dataset from materials science. The details of the functions are described as follows. (...) Benchmark Functions: We used two benchmark functions called Styblinski-Tang (d = 2, M = 2) and Hartmann6 (d = 6, M = 3). (...) Material Data: As an example of practical applications, we applied our method to the parameter optimization of a simulation model in materials science. The task is to optimize d = 2 material parameters in the simulation model (Tsukada et al., 2014).
Dataset Splits | No | The paper defines metrics such as Simple Regret and Inference Regret but does not specify dataset splits (e.g., percentages or counts) for training, validation, and testing as is common in supervised learning.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the 'GPyTorch library' and the 'Adam optimizer' but does not specify their version numbers, which are required for reproducible software dependencies.
Experiment Setup | Yes | For each evaluation, we fit the hyperparameters of the kernel and the noise parameter by maximizing the marginal likelihood with 1000 iterations of the Adam optimizer (Kingma & Ba, 2014). For optimization of the highest fidelity f and x, we ran L-BFGS-B 10 times with random initialization from the observed data for 200 iterations each time. (...) For the sampling of f in MES and MF-MES, we employed the RFM-based approach described in Section 4, and sampled 10 f's at every iteration.
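The Styblinski-Tang benchmark cited under Open Datasets has a standard closed form, which makes the "Open Datasets: Yes" classification easy to verify. A minimal sketch of the single-fidelity d = 2 function (the paper's multi-fidelity variants are not reproduced here):

```python
import numpy as np

def styblinski_tang(x):
    """Standard Styblinski-Tang benchmark:
    f(x) = 0.5 * sum_i (x_i^4 - 16*x_i^2 + 5*x_i).

    The global minimum lies at x_i ~= -2.903534 in every dimension,
    with value ~= -39.166 per dimension.
    """
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)
```

For d = 2 the global minimum is roughly -78.33 at (-2.903534, -2.903534); this is why the benchmark needs no external data download, only a function definition.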
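The Experiment Setup quote describes a multi-start L-BFGS-B step: 10 restarts initialized from observed data, 200 iterations each. A hedged SciPy sketch of that pattern, with a caller-supplied objective standing in for the paper's highest-fidelity GP posterior (the function and parameter names here are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

def multistart_lbfgsb(objective, observed_x, bounds,
                      n_starts=10, maxiter=200, seed=0):
    """Maximize `objective` via L-BFGS-B with multiple restarts.

    Restarts are initialized from rows of `observed_x`, mirroring the
    paper's "random initialization from the observed data". SciPy
    minimizes, so the objective is negated internally.
    """
    rng = np.random.default_rng(seed)
    best_x, best_val = None, -np.inf
    for _ in range(n_starts):
        x0 = observed_x[rng.integers(len(observed_x))]
        res = minimize(lambda x: -objective(x), x0, method="L-BFGS-B",
                       bounds=bounds, options={"maxiter": maxiter})
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x, best_val
```

The same multi-start scaffold applies whether the objective is a GP posterior mean or an acquisition function; only the `objective` callable changes.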