Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation

Authors: Fengxue Zhang, Jialin Song, James C Bowden, Alexander Ladd, Yisong Yue, Thomas Desautels, Yuxin Chen

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate empirically the effectiveness of BALLET on both synthetic and real-world optimization tasks." and Section 4, "Experiment".
Researcher Affiliation | Collaboration | Fengxue Zhang (1), Jialin Song (2), James Bowden (3), Alexander Ladd (4), Yisong Yue (3), Thomas A. Desautels (4), Yuxin Chen (1); 1: Department of Computer Science, University of Chicago, Illinois, U.S.; 2: Nvidia, California, U.S.; 3: California Institute of Technology, California, U.S.; 4: Lawrence Livermore National Laboratory, California, U.S.
Pseudocode | Yes | Algorithm 1: Bayesian Optimization with Adaptive Level-Set Estimation (BALLET)
Open Source Code | No | The paper refers to open-sourced implementations of the baseline algorithms (LA-MCTS and TuRBO) but does not provide a statement or link for its own proposed method (BALLET).
Open Datasets | Yes | "Water Converter Configuration-32D. This UCI dataset we use consists of positions and absorbed power outputs of wave energy converters (WECs) from the southern coast of Sydney."; "Nanophotonics Structure Design-5D. We wish to optimize a weighted figure of merit quantifying the fitness of the transmission spectrum for hyperspectral imaging as assessed by a numerical solver (Song et al., 2018)."; "GB1-118D. ... (Wu et al., 2019)."; "Rosetta Protein Design-86D. ... (Desautels et al., 2020; 2022)."
Dataset Splits | No | The paper mentions 'warm-up' sets of initial observations but does not provide training, validation, and test dataset splits with percentages or counts for model evaluation.
Hardware Specification | No | The paper does not specify the hardware used for the experiments, such as GPU or CPU models or cloud computing resources.
Software Dependencies | No | The paper mentions software components and models such as Deep Kernel Learning, KISS-GP, and an Auto-Encoder, but does not provide version numbers for any of these dependencies.
Experiment Setup | Yes | "The neural network consists of three hidden layers with 1000, 500, and 50 neurons, and ReLU non-linearity respectively. The output layer is one-dimensional. We use squared exponential kernel or linear kernel as the base kernel..." and "Thompson Sampling (Chapelle & Li, 2011) for the acquisition function α. For each of the algorithms, the same 10 randomly picked points serve as the warm-up set. For BALLET-ICI, we set δ in Lemma 1 to be 0.2. Throughout the experiments, we fix β_t^{1/2} = 0.2 only when identifying ROIs as in line 4 of Algorithm 1."
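The quoted setup is concrete enough to sketch in code. Below is a minimal, hypothetical GPyTorch sketch of a deep-kernel surrogate with the quoted architecture (hidden layers of 1000, 500, and 50 ReLU units, a one-dimensional output, and a squared-exponential base kernel), plus a simple confidence-bound ROI filter using the quoted β_t^{1/2} = 0.2. The class and function names (FeatureExtractor, DeepKernelGP, identify_roi), the 32-D toy task, and the filtering rule are illustrative assumptions; this is not the authors' released implementation (none is linked) and not the exact rule in Algorithm 1.

```python
import torch
import gpytorch


class FeatureExtractor(torch.nn.Sequential):
    """Three ReLU hidden layers (1000, 500, 50) and a 1-D output, as quoted."""
    def __init__(self, input_dim):
        super().__init__(
            torch.nn.Linear(input_dim, 1000), torch.nn.ReLU(),
            torch.nn.Linear(1000, 500), torch.nn.ReLU(),
            torch.nn.Linear(500, 50), torch.nn.ReLU(),
            torch.nn.Linear(50, 1),
        )


class DeepKernelGP(gpytorch.models.ExactGP):
    """GP surrogate with a squared-exponential base kernel on learned features."""
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = FeatureExtractor(train_x.shape[-1])
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.feature_extractor(x)
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z)
        )


def identify_roi(mean, std, beta_sqrt=0.2):
    """Keep candidates whose upper confidence bound reaches the best lower
    confidence bound -- an illustrative level-set-style ROI filter using the
    quoted beta_t^{1/2} = 0.2, not the paper's exact Algorithm 1 rule."""
    ucb = mean + beta_sqrt * std
    lcb = mean - beta_sqrt * std
    return ucb >= lcb.max()


# Warm-up set of 10 random points on a hypothetical 32-D task, with a
# placeholder objective; in practice these come from the benchmark function.
train_x, train_y = torch.rand(10, 32), torch.randn(10)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DeepKernelGP(train_x, train_y, likelihood)

# Posterior over random candidate points (model left untrained for brevity).
model.eval()
likelihood.eval()
candidates = torch.rand(256, 32)
with torch.no_grad():
    posterior = likelihood(model(candidates))
roi_mask = identify_roi(posterior.mean, posterior.stddev)
print(f"ROI keeps {int(roi_mask.sum())} of {candidates.shape[0]} candidates")
```

In the actual experiments the surrogate would first be fit by maximizing the marginal likelihood, and the Thompson Sampling acquisition would then be restricted to the identified ROI; those steps are omitted from this sketch.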