Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Tight Regret Bounds for Bayesian Optimization in One Dimension
Authors: Jonathan Scarlett
ICML 2018 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time T behaves as Ω(√T) and O(√(T log T)). This gives a tight characterization up to a √(log T) factor, and includes the first non-trivial lower bound for noisy BO. |
| Researcher Affiliation | Academia | 1Department of Computer Science & Department of Mathematics, National University of Singapore. Correspondence to: Jonathan Scarlett <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Informal description of our algorithm, based on reducing uncertainty in epochs via repeated sampling. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | This is a theoretical paper focusing on mathematical bounds for Bayesian optimization, and it does not use or reference any publicly available datasets for training or empirical evaluation. |
| Dataset Splits | No | This is a theoretical paper and does not involve empirical data or dataset splits for training, validation, or testing. |
| Hardware Specification | No | This is a theoretical paper. It does not describe any experiments that would require specific hardware to run. |
| Software Dependencies | No | This is a theoretical paper. It does not mention any specific software dependencies or versions required to replicate experiments, as no empirical experiments are conducted. |
| Experiment Setup | No | This is a theoretical paper. It describes the problem setup and assumptions for its theoretical analysis, but it does not detail an empirical experimental setup with hyperparameters or system-level training settings. |
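
As a reading aid, the regret characterization quoted in the Research Type row can be written out in standard asymptotic notation. This is a restatement of the abstract's claim under the assumption that R_T denotes the best possible cumulative regret up to time T, not an additional result:

```latex
% Minimax cumulative regret R_T for one-dimensional Bayesian optimization,
% as characterized in the paper's abstract:
%   lower bound:  R_T = \Omega(\sqrt{T})
%   upper bound:  R_T = O(\sqrt{T \log T})
% The two bounds match up to a \sqrt{\log T} factor.
\Omega\big(\sqrt{T}\big) \;\le\; R_T \;\le\; O\big(\sqrt{T \log T}\big)
```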