Regret Bounds for Information-Directed Reinforcement Learning
Authors: Botao Hao, Tor Lattimore
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We develop novel information-theoretic tools to bound the information ratio and cumulative information gain about the learning target. Our theoretical results shed light on the importance of choosing the learning target so that practitioners can balance computation and regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla-IDS, which learns the whole environment under tabular finite-horizon MDPs. |
| Researcher Affiliation | Industry | Botao Hao (DeepMind, haobotao000@gmail.com); Tor Lattimore (DeepMind, lattimore@google.com) |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not use datasets for empirical evaluation. Therefore, no information about public dataset availability is provided. |
| Dataset Splits | No | The paper is theoretical and does not report on experiments involving dataset splits. Therefore, no validation split information is provided. |
| Hardware Specification | No | The paper is theoretical and does not report on experiments. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not report on experiments requiring specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not report on experiments. Therefore, no experimental setup details like hyperparameters or training configurations are provided. |
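Since the paper is theoretical, there is no reference implementation to reproduce. For orientation, the information-directed sampling (IDS) principle the paper analyzes can be illustrated with a minimal sketch: at each round, choose the action minimizing the information ratio, i.e. squared expected regret divided by expected information gain about the learning target. The function name, the deterministic (non-randomized) selection rule, and the numeric values below are illustrative assumptions, not code or quantities from the paper.

```python
import numpy as np

def ids_action(expected_regret, info_gain, eps=1e-12):
    """Illustrative deterministic IDS rule: pick argmin_a Delta(a)^2 / g(a)
    over a finite action set, where Delta(a) is the expected regret of
    action a and g(a) its expected information gain about the target.
    (Full IDS randomizes over up to two actions; this sketch omits that.)"""
    expected_regret = np.asarray(expected_regret, dtype=float)
    info_gain = np.asarray(info_gain, dtype=float)
    # Guard against division by zero for uninformative actions.
    ratio = expected_regret**2 / np.maximum(info_gain, eps)
    return int(np.argmin(ratio))

# Hypothetical values: action 1 has moderate regret but high information
# gain, so it minimizes the information ratio.
delta = [0.5, 0.4, 0.1]   # assumed expected regrets
gain = [0.1, 0.8, 0.01]   # assumed information gains
print(ids_action(delta, gain))  # → 1
```

The ratios here are 2.5, 0.2, and 1.0, so the greedy-looking low-regret action 2 loses to action 1, whose higher information gain more than compensates for its regret.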