Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Bandit Linear Control
Authors: Asaf Cassel, Tomer Koren
NeurIPS 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We present a new and efficient algorithm that, for strongly convex and smooth costs, obtains regret that grows with the square root of the time horizon T. We also give extensions of this result to general convex, possibly non-smooth costs, and to non-stochastic system noise. A key component of our algorithm is a new technique for addressing bandit optimization of loss functions with memory. |
| Researcher Affiliation | Academia | Asaf Cassel, School of Computer Science, Tel Aviv University (EMAIL); Tomer Koren, School of Computer Science, Tel Aviv University (EMAIL) |
| Pseudocode | Yes | Algorithm 1: Bandit Linear Control; Algorithm 2: BCO Reduction |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not describe the use of any datasets for training or experimentation. |
| Dataset Splits | No | The paper is theoretical and does not describe dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any specific experimental setup details, hyperparameters, or training configurations. |