Bayesian Optimization with Switching Cost: Regret Analysis and Lookahead Variants

Authors: Peng Liu, Haowei Wang, Wei Qiyu

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In addition, the empirical performance of the proposed algorithm is tested based on both synthetic and real data experiments, and it shows that our cost-aware non-myopic algorithm performs better than other popular alternatives."
Researcher Affiliation | Collaboration | Peng Liu (1, Singapore Management University), Haowei Wang (2, Rice-Rick Digitalization), Wei Qiyu (3, Shanghai University); emails: liupeng@smu.edu.sg, haowei wang@ricerick.com, qywei@shu.edu.cn
Pseudocode | Yes | "Algorithm 1: dist UCB: distance-adjusted UCB" (a hedged sketch of a distance-penalized UCB step follows the table)
Open Source Code | No | The paper does not provide a link to open-source code or explicitly state that the code is publicly available.
Open Datasets | Yes | "Using a neural network model, we empirically evaluate dist UCB on two hyperparameter tuning tasks for training a two-layer feed-forward neural network on two popular UCI datasets: breast cancer and spam"
Dataset Splits | Yes | "For both data sets, we allocate 70% to training and 30% to the test set." (a minimal sketch of this split follows the table)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU models, CPU types, or memory).
Software Dependencies | No | The paper mentions software such as PyTorch in its references, but it does not specify the software dependencies or version numbers used in the experimental setup.
Experiment Setup | Yes | "The total iteration budget N for each experiment is set to N = n0 + 100, where n0 denotes the number of initial design points and is set as 20, 40, and 60 for the three synthetic functions, respectively. In addition, we add homogeneous noise from a normal distribution with a standard deviation of 0.1, and set up 32 simulations in each Monte Carlo approximation to keep the same setting as EIpu. For a fair comparison, each method's number of lookahead steps is set to 3. ... We consider tuning four hyperparameters: batch size, initial learning rate, learning rate schedule, and the number of hidden dimensions. For each of the four hyperparameters to be adjusted, the adjustment range is given: [32, 128], [1e-6, 1.0], [1e-6, 1.0], [0.5, 4]. ... Each iteration repeats 5 times." (a sketch encoding this setup follows the table)
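
The Pseudocode row above points to Algorithm 1, dist UCB (distance-adjusted UCB). The paper's exact update rule is not reproduced on this page, so the following is a minimal sketch under one assumption: the acquisition is a standard GP-UCB score penalized by the switching cost, taken here to be the Euclidean distance from the previous query point. The kernel choice, the weights beta and gamma, and the helper name dist_ucb_step are illustrative, not the authors' code.

    # Hypothetical sketch of one dist UCB step; not the authors' implementation.
    # Assumption: acquisition = GP-UCB score minus a weighted switching cost,
    # with the cost modeled as Euclidean distance from the previous query.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def dist_ucb_step(X, y, candidates, x_prev, beta=2.0, gamma=0.5):
        # Fit a GP surrogate; alpha = observation noise variance (std 0.1,
        # matching the noise level reported in the experiment setup).
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=0.1**2)
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        ucb = mu + np.sqrt(beta) * sigma                    # exploration bonus
        cost = np.linalg.norm(candidates - x_prev, axis=1)  # switching cost
        return candidates[np.argmax(ucb - gamma * cost)]    # penalized argmax

    # Toy usage on a random 2-D problem.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(20, 2))
    y = np.sin(X).sum(axis=1)
    x_next = dist_ucb_step(X, y, rng.uniform(size=(256, 2)), x_prev=X[-1])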
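
The Open Datasets and Dataset Splits rows report a 70%/30% train/test split on two UCI sets. A minimal sketch of that split, assuming scikit-learn's bundled copy of the UCI breast cancer data (the spam set would need to be fetched from the UCI repository separately) and an arbitrary fixed seed:

    # Sketch of the reported 70/30 split; the loader and seed are assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, random_state=0)  # 70% train / 30% test
    print(X_train.shape, X_test.shape)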
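
The Experiment Setup row fixes the iteration budget, noise level, Monte Carlo and lookahead settings, and the four tuning ranges. The sketch below encodes those reported values as plain Python constants; the dictionary key names and the uniform sampler are assumptions made for illustration.

    # Reported experimental constants, encoded for illustration only.
    import numpy as np

    SEARCH_SPACE = {                          # key names are assumed
        "batch_size":        (32, 128),
        "init_lr":           (1e-6, 1.0),
        "lr_schedule":       (1e-6, 1.0),
        "hidden_dim_factor": (0.5, 4.0),
    }
    N0 = 20              # initial design points: 20/40/60 per synthetic function
    BUDGET = N0 + 100    # total iteration budget N = n0 + 100
    NOISE_STD = 0.1      # homogeneous Gaussian observation noise
    MC_SIMULATIONS = 32  # Monte Carlo simulations, matching the EIpu setting
    LOOKAHEAD_STEPS = 3  # lookahead steps for every compared method
    REPEATS = 5          # each iteration repeats 5 times

    def sample_config(rng):
        # Uniform draw from the box-constrained space (sampling scheme assumed).
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}
        cfg["batch_size"] = int(round(cfg["batch_size"]))  # integer-valued
        return cfg

    print(sample_config(np.random.default_rng(0)))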