Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization
Authors: Kwang-Sung Jun, Jungtaek Kim
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our empirical evaluation in diverse Bayesian optimization tasks shows that our proposed algorithms demonstrate better or comparable performance compared to existing methods." and "4. Experiments: We conduct several experiments with two noise types..." |
| Researcher Affiliation | Academia | Kwang-Sung Jun 1 Jungtaek Kim 2 1University of Arizona 2University of Pittsburgh. |
| Pseudocode | Yes | Algorithm 1 LOSAN (Linear Optimism with Semi Adaptivity to Noise) and Algorithm 2 LOFAV (Linear Optimism with Full Adaptivity to Variance) |
| Open Source Code | Yes | The implementation of our proposed methods is available at https://github.com/jungtaekkim/LOSAN-LOFAV. |
| Open Datasets | Yes | We utilize NATS-Bench (Dong et al., 2021)... |
| Dataset Splits | No | The paper describes performing experiments over multiple rounds and random trials, but does not provide specific details on train/validation/test dataset splits (e.g., percentages or counts) or a clear methodology for reproducible data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'Gaussian process regression' and 'Multi-start L-BFGS-B' but does not specify their version numbers or other software dependencies with explicit version information. |
| Experiment Setup | Yes | "To fairly compare our algorithms to OFUL, we perform each experiment over 50 rounds where S = 1.0, d = 32, \|Xt\| = 128, and σ0 or R = 1.0." and "Each original point is transformed into a 128-dimensional random feature..." |
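The reported setup (S = 1.0, d = 32, \|Xt\| = 128, σ0 or R = 1.0, 50 rounds) can be sketched as a minimal linear-bandit simulation. This is an illustrative sketch only: the parameter values come from the paper's quoted setup, but the Gaussian arm sampling, the placeholder uniform-random policy, and the reward model are our assumptions, not the authors' released code (see their repository for the actual implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

S = 1.0        # bound on the norm of the unknown parameter theta*
d = 32         # feature dimension
n_arms = 128   # |Xt|: number of candidate arms per round
R = 1.0        # (sub-)Gaussian noise scale (sigma_0 or R in the paper)
T = 50         # rounds, as in the reported comparison with OFUL

# Unknown parameter with ||theta*|| <= S (assumed construction).
theta_star = rng.normal(size=d)
theta_star *= S / np.linalg.norm(theta_star)

regret = 0.0
for t in range(T):
    arms = rng.normal(size=(n_arms, d))          # fresh action set X_t (assumed)
    means = arms @ theta_star                    # expected linear rewards
    chosen = rng.integers(n_arms)                # placeholder policy, not LOSAN/LOFAV
    reward = means[chosen] + R * rng.normal()    # noisy observed reward
    regret += means.max() - means[chosen]        # per-round pseudo-regret

print(f"cumulative pseudo-regret over {T} rounds: {regret:.2f}")
```

A real run of the paper's algorithms would replace the placeholder policy with the optimistic arm selection of LOSAN or LOFAV; the scaffold above only fixes the dimensions and noise scale quoted in the table.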