Communication-Constrained Bandits under Additive Gaussian Noise
Authors: Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | For this setting, we derive an information-theoretic lower bound of $\Omega\big(\sqrt{KT/(\mathrm{SNR} \wedge 1)}\big)$ on the minimax regret of any scheme, where $\mathrm{SNR} := P/\sigma^2$, and $K$ and $T$ are the number of arms and time horizon, respectively. Furthermore, we propose a multi-phase bandit algorithm, UE-UCB++, which matches this lower bound up to a minor additive factor. (The bound is written out in full below the table.) |
| Researcher Affiliation | Academia | National University of Singapore. Correspondence to: Prathamesh Mayekar, Jonathan Scarlett, Vincent Y. F. Tan <pratha22@nus.edu.sg, scarlett@comp.nus.edu.sg, vtan@nus.edu.sg>. |
| Pseudocode | Yes | Algorithm 1: UCB0 bandit algorithm with the CAS encoding algorithm; Algorithm 2: UE-UCB algorithm with the CAS encoding algorithm; Algorithm 3: UE-UCB++ algorithm with the CAS encoding algorithm. (An illustrative sketch of the feedback loop appears after the table.) |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of source code for the described methodology. |
| Open Datasets | No | The paper defines abstract reward distributions and bandit instances (e.g., 'subgaussian distributions with variance factor 1') for theoretical analysis, but does not use or provide access information for any publicly available or open dataset. |
| Dataset Splits | No | The paper is theoretical and does not describe the use of real datasets; therefore, no dataset splits for training, validation, or testing are specified. |
| Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and does not contain details about an experimental setup, including hyperparameters or training configurations. |
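
For reference, the lower bound quoted in the Research Type row can be written out in full. The display below is a transcription of the abstract's statement; the regime remark in the comments is an elementary consequence of taking the minimum with 1, not a claim taken from the paper.

```latex
% Minimax regret lower bound from the abstract, with SNR := P / sigma^2:
\Omega\!\left(\sqrt{\frac{KT}{\mathrm{SNR} \wedge 1}}\right),
\qquad \mathrm{SNR} := \frac{P}{\sigma^2}.
% When SNR >= 1, min(SNR, 1) = 1 and this is the classical
% Omega(sqrt(KT)) minimax rate for K-armed bandits; when SNR < 1,
% the bound inflates by a factor of 1 / sqrt(SNR).
```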
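Since no source code is released (see the Open Source Code row), the following Python sketch illustrates the kind of communication-constrained feedback loop the abstract describes: each reward is passed through a power-constrained encoder, corrupted by additive Gaussian noise of variance σ², and consumed by the learner. Everything here is an assumption made for illustration: the clipping encoder is a stand-in, not the paper's CAS scheme; the learner is vanilla UCB rather than UE-UCB++; and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper).
K, T = 10, 5000           # number of arms, time horizon
P, sigma2 = 4.0, 16.0     # power budget and Gaussian channel noise variance
snr = P / sigma2          # SNR := P / sigma^2

means = rng.uniform(0.0, 1.0, size=K)   # unknown arm means

def encode(reward: float, power: float = P) -> float:
    """Stand-in power-constrained encoder: clip so the encoded reward's
    second moment stays within the budget. The paper's CAS scheme differs."""
    bound = np.sqrt(power)
    return float(np.clip(reward, -bound, bound))

# Vanilla UCB run on the noisy encoded feedback (not the paper's UE-UCB++).
counts = np.zeros(K)
sums = np.zeros(K)
regret = 0.0
for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                                   # pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    reward = means[arm] + rng.normal()                # 1-subgaussian reward
    received = encode(reward) + rng.normal(0.0, np.sqrt(sigma2))
    counts[arm] += 1
    sums[arm] += received
    regret += means.max() - means[arm]

print(f"SNR = {snr:.2f}, empirical pseudo-regret = {regret:.1f}")
print(f"lower-bound scaling sqrt(KT / min(SNR, 1)) = "
      f"{np.sqrt(K * T / min(snr, 1.0)):.1f}")
```

Comparing the printed empirical regret against the lower-bound scaling only illustrates the orders of magnitude involved; a plain UCB learner on clipped, noise-corrupted feedback is biased and is not expected to attain the paper's matching upper bound.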