Does Sparsity Help in Learning Misspecified Linear Bandits?

Authors: Jialin Dong, Lin Yang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We establish novel algorithms that obtain O(ε)-optimal actions by querying O(ε^{-m} d^m) actions, where m is the sparsity parameter. For fixed sparsity m, the algorithm finds an O(ε)-optimal action with poly(d/ε) queries, breaking the O(ε√d) barrier. We establish information-theoretic lower bounds to show that our upper bounds are nearly tight.
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, University of California, Los Angeles, USA. Correspondence to: Jialin Dong, Lin F. Yang <jialind@g.ucla.edu, linyang@ee.ucla.edu>.
Pseudocode | Yes | Algorithm 1: Parameter Elimination (Page 3); Algorithm 2: ε^{-m}-free Algorithm (Page 4); Algorithm 3: poly(m)-sample-complexity Algorithm (Page 5).
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not involve empirical training on a dataset, nor does it provide access information for any dataset.
Dataset Splits | No | The paper is theoretical and does not involve dataset splitting for validation.
Hardware Specification | No | The paper is theoretical and does not discuss hardware used for any experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or training configurations.
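
Note on the complexity claims in the Research Type row (simple algebra on the stated bound, not a quote from the paper): for any fixed sparsity m, the query bound O(ε^{-m} d^m) is indeed polynomial in d/ε,

\[
O\!\left(\varepsilon^{-m} d^{m}\right) \;=\; O\!\left((d/\varepsilon)^{m}\right) \;=\; \mathrm{poly}(d/\varepsilon) \quad \text{for fixed } m,
\]

which is consistent with the claim that, for fixed m, an O(ε)-optimal action can be found with poly(d/ε) queries rather than only an O(ε√d)-optimal one.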