Variance-Aware Sparse Linear Bandits

Authors: Yan Dai, Ruosong Wang, Simon Shaolei Du

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we present the first variance-aware regret guarantee for sparse linear bandits: Õ(√(d Σ_{t=1}^T σ_t²) + 1), where σ_t² is the variance of the noise at the t-th round. This bound naturally interpolates between the regret bounds for the worst-case constant-variance regime (i.e., σ_t = Ω(1)) and the benign deterministic regime (i.e., σ_t = 0). To achieve this variance-aware regret guarantee, we develop a general framework that converts any variance-aware linear bandit algorithm into a variance-aware algorithm for sparse linear bandits in a black-box manner.
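The interpolation claimed in the quoted abstract can be made explicit by substituting the two limiting regimes into the bound (a worked check of the stated formula, not an additional result from the paper):

```latex
% Regret bound quoted from the abstract:
%   \tilde{O}\!\Big(\sqrt{d \sum_{t=1}^{T} \sigma_t^2} \;+\; 1\Big)
%
% Worst-case constant variance, \sigma_t = \Omega(1):
%   \sum_{t=1}^{T} \sigma_t^2 = \Theta(T)
%   \;\Longrightarrow\; \tilde{O}\big(\sqrt{dT}\big),
% the familiar worst-case rate.
%
% Benign deterministic regime, \sigma_t = 0 for all t:
%   \sum_{t=1}^{T} \sigma_t^2 = 0
%   \;\Longrightarrow\; \tilde{O}(1),
% i.e., regret independent of the horizon T.
```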
Researcher Affiliation | Academia | Yan Dai (IIIS, Tsinghua University); Ruosong Wang (University of Washington); Simon S. Du (University of Washington)
Pseudocode | Yes | Our framework VASLB is presented in Algorithm 1.
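The report does not reproduce Algorithm 1, but the black-box reduction described in the abstract can be sketched as an interface. Everything below is a hypothetical illustration of the reduction *pattern* (wrap any variance-aware linear bandit algorithm and run it on a restricted coordinate set), not the paper's actual VASLB procedure; all class and function names are invented for this sketch.

```python
import numpy as np


class VarianceAwareLinearBandit:
    """Hypothetical stand-in for any variance-aware linear bandit
    algorithm that the black-box reduction could wrap."""

    def __init__(self, dim, arms):
        self.dim = dim
        self.arms = np.asarray(arms)      # candidate actions, shape (K, dim)
        self.theta_hat = np.zeros(dim)    # running parameter estimate

    def select(self):
        # Greedy placeholder; a real variance-aware algorithm would
        # add an exploration bonus shaped by the observed variances.
        return int(np.argmax(self.arms @ self.theta_hat))

    def update(self, arm_idx, reward, variance):
        # Illustrative variance-weighted stochastic update: low-variance
        # observations move the estimate more than noisy ones.
        x = self.arms[arm_idx]
        step = 1.0 / (1.0 + variance)
        self.theta_hat += step * (reward - x @ self.theta_hat) * x


def sparse_reduction(arms, support):
    """Black-box step of the reduction: given a candidate support of the
    sparse parameter, hand the base algorithm the same arms restricted
    to those coordinates, shrinking the effective dimension."""
    arms = np.asarray(arms)
    return VarianceAwareLinearBandit(dim=len(support), arms=arms[:, support])
```

Usage: with 4 standard-basis arms in R^4 and a candidate support {0, 2}, `sparse_reduction(np.eye(4), [0, 2])` yields a base learner operating in dimension 2 rather than 4, which is the dimension-reduction benefit the framework's black-box conversion is after.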
Open Source Code | No | The paper does not provide an unambiguous statement or link indicating the release of source code for the described methodology.
Open Datasets | No | The paper describes theoretical work and does not use or refer to any specific dataset for training or evaluation.
Dataset Splits | No | The paper presents theoretical analysis and does not involve experimental validation with dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for computations or experiments.
Software Dependencies | No | The paper is theoretical and does not mention specific software dependencies with version numbers (e.g., libraries, frameworks, or solvers) used for running experiments.
Experiment Setup | No | The paper focuses on theoretical development and analysis of algorithms, and thus does not provide details of an experimental setup such as hyperparameters or training configurations.