Bandit Linear Control

Authors: Asaf Cassel, Tomer Koren

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present a new and efficient algorithm that, for strongly convex and smooth costs, obtains regret that grows with the square root of the time horizon T. We also give extensions of this result to general convex, possibly non-smooth costs, and to non-stochastic system noise. A key component of our algorithm is a new technique for addressing bandit optimization of loss functions with memory.
Researcher Affiliation | Academia | Asaf Cassel, School of Computer Science, Tel Aviv University (acassel@mail.tau.ac.il); Tomer Koren, School of Computer Science, Tel Aviv University (tkoren@tauex.tau.ac.il)
Pseudocode | Yes | Algorithm 1 (Bandit Linear Control); Algorithm 2 (BCO Reduction)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not use any datasets for training or experimentation.
Dataset Splits | No | The paper is theoretical and does not describe dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup, hyperparameters, or training configurations.
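The abstract quoted above highlights bandit optimization, where the learner observes only a scalar cost for the point it plays, never a gradient. As a rough illustration of the classical memoryless bandit convex optimization setting that such reductions build on, the sketch below implements a generic one-point gradient-estimator loop with projection onto a ball. All names and parameters here are illustrative assumptions; this is not the paper's Algorithm 1 or Algorithm 2, which additionally handle loss functions with memory and the control dynamics.

```python
import numpy as np

def one_point_bco(loss, x0, T, delta=0.1, eta=0.01, radius=1.0, seed=0):
    """Illustrative one-point bandit convex optimization loop (not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(T):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)          # random direction on the unit sphere
        cost = loss(x + delta * u)      # bandit feedback: a single scalar cost
        g = (d / delta) * cost * u      # one-point gradient estimate
        x -= eta * g                    # gradient-descent step on the estimate
        norm = np.linalg.norm(x)        # project back onto the feasible ball
        if norm > radius:
            x *= radius / norm
    return x
```

The `(d / delta) * cost * u` term is the standard one-point estimator whose expectation approximates the gradient of a smoothed version of the loss; shrinking `delta` reduces bias at the price of higher variance.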