Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Regret Bounds for Information-Directed Reinforcement Learning
Authors: Botao Hao, Tor Lattimore
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We develop novel information-theoretic tools to bound the information ratio and cumulative information gain about the learning target. Our theoretical results shed light on the importance of choosing the learning target such that the practitioners can balance the computation and regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla-IDS which learns the whole environment under tabular finite-horizon MDPs. |
| Researcher Affiliation | Industry | Botao Hao (DeepMind); Tor Lattimore (DeepMind) |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not use datasets for empirical evaluation. Therefore, no information about public dataset availability is provided. |
| Dataset Splits | No | The paper is theoretical and does not report on experiments involving dataset splits. Therefore, no validation split information is provided. |
| Hardware Specification | No | The paper is theoretical and does not report on experiments. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not report on experiments requiring specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not report on experiments. Therefore, no experimental setup details like hyperparameters or training configurations are provided. |