Tight Regret Bounds for Bayesian Optimization in One Dimension

Authors: Jonathan Scarlett

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time T behaves as Ω(√T) and O(√(T log T)). This gives a tight characterization up to a √(log T) factor, and includes the first non-trivial lower bound for noisy BO. (See the worked ratio after the table.)
Researcher Affiliation | Academia | Department of Computer Science & Department of Mathematics, National University of Singapore. Correspondence to: Jonathan Scarlett <scarlett@comp.nus.edu.sg>.
Pseudocode | Yes | Algorithm 1: "Informal description of our algorithm, based on reducing uncertainty in epochs via repeated sampling." (A hedged sketch of this epoch structure follows the table.)
Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | This is a theoretical paper focusing on mathematical bounds for Bayesian optimization, and it does not use or reference any publicly available datasets for training or empirical evaluation.
Dataset Splits | No | This is a theoretical paper and does not involve empirical data or dataset splits for training, validation, or testing.
Hardware Specification | No | This is a theoretical paper. It does not describe any experiments that would require specific hardware to run.
Software Dependencies | No | This is a theoretical paper. It does not mention any specific software dependencies or versions required to replicate experiments, as no empirical experiments are conducted.
Experiment Setup | No | This is a theoretical paper. It describes the problem setup and assumptions for its theoretical analysis, but it does not detail an empirical experimental setup with hyperparameters or system-level training settings.
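
To make the "tight up to a √(log T) factor" claim in the Research Type row concrete, here is the one-line ratio of the two stated rates. This arithmetic is ours, not an equation quoted from the paper:

```latex
% Ratio of the upper and lower bounds stated in the Research Type row;
% this step is our arithmetic, not a displayed equation from the paper.
\[
  \frac{\sqrt{T \log T}}{\sqrt{T}} = \sqrt{\log T},
\]
% so the Omega(sqrt(T)) lower bound and O(sqrt(T log T)) upper bound
% match up to a multiplicative sqrt(log T) factor.
```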
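
The Pseudocode row refers to Algorithm 1, which the paper describes only informally as reducing uncertainty in epochs via repeated sampling. The sketch below is a minimal successive-elimination reading of that idea; the function name `epoch_elimination_bo`, the grid discretization, the noise model, the per-epoch sample counts, and the elimination rule are all our illustrative assumptions, not the paper's exact procedure or constants.

```python
import numpy as np

# Hypothetical sketch of an epoch-based successive-elimination loop.
# The grid size, noise model, and confidence widths below are illustrative
# assumptions, not the constants from Scarlett (2018).

def epoch_elimination_bo(f, noise_std=0.1, n_grid=64, n_epochs=6, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, n_grid)   # discretize the domain [0, 1]
    active = np.arange(n_grid)             # candidate points still in play
    means = np.zeros(n_grid)
    counts = np.zeros(n_grid)

    for epoch in range(1, n_epochs + 1):
        # Sample each active point enough times to shrink the standard
        # error of its mean estimate to roughly noise_std / 2**epoch.
        target_width = noise_std / 2 ** epoch
        n_samples = int(np.ceil((noise_std / target_width) ** 2))
        for i in active:
            y = f(grid[i]) + noise_std * rng.standard_normal(n_samples)
            means[i] = (means[i] * counts[i] + y.sum()) / (counts[i] + n_samples)
            counts[i] += n_samples
        # Eliminate points whose upper confidence bound falls below the
        # best lower confidence bound among the active points.
        ucb = means[active] + target_width
        lcb = means[active] - target_width
        active = active[ucb >= lcb.max()]

    best = active[np.argmax(means[active])]
    return grid[best]

# Toy usage: maximize a smooth function on [0, 1] (true maximizer at 0.3).
x_star = epoch_elimination_bo(lambda x: -(x - 0.3) ** 2)
print(f"estimated maximizer: {x_star:.3f}")
```

Each epoch halves the target confidence width and discards candidates whose upper confidence bound falls below the best lower confidence bound, which mirrors the "reduce uncertainty in epochs via repeated sampling" structure at a high level.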