When is Agnostic Reinforcement Learning Statistically Tractable?

Authors: Zeyu Jia, Gene Li, Alexander Rakhlin, Ayush Sekhari, Nathan Srebro

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the problem of agnostic PAC reinforcement learning (RL): given a policy class Π, how many rounds of interaction with an unknown MDP (with a potentially large state and action space) are required to learn an ε-suboptimal policy with respect to Π? Towards that end, we introduce a new complexity measure, called the spanning capacity, that depends solely on the set Π and is independent of the MDP dynamics. With a generative model, we show that for any policy class Π, bounded spanning capacity characterizes PAC learnability. However, for online RL, the situation is more subtle. We show there exists a policy class Π with bounded spanning capacity that requires a superpolynomial number of samples to learn. This reveals a surprising separation for agnostic learnability between generative access and online access models (as well as between deterministic/stochastic MDPs under online access). On the positive side, we identify an additional sunflower structure, which in conjunction with bounded spanning capacity enables statistically efficient online RL via a new algorithm called POPLER, which takes inspiration from classical importance sampling methods as well as techniques for reachable-state identification and policy evaluation in reward-free exploration. (A standard formalization of the agnostic PAC objective is sketched after this table.)
Researcher Affiliation | Academia | Zeyu Jia (MIT), Gene Li (TTIC), Alexander Rakhlin (MIT), Ayush Sekhari (MIT), Nathan Srebro (TTIC)
Pseudocode | Yes | We provide a new agnostic PAC RL algorithm called POPLER that is statistically efficient whenever the given policy class has both bounded spanning capacity and the sunflower property (Section 6). POPLER (Algorithm 1) takes as input a policy class Π, as well as sets Πcore and {Sπ}π∈Π, which can be computed beforehand by enumeration. Algorithm 1: Policy OPtimization by Learning ε-Reachable States (POPLER). Require: policy class Π, sets Πcore and {Sπ}π∈Π, parameters K, D, n1, n2, ε, δ. (A generic importance-sampling sketch, distinct from Algorithm 1, appears after this table.)
Open Source Code | No | The paper does not contain any statement about releasing open-source code for the described methodology, nor does it provide any links to a code repository.
Open Datasets | No | The paper is theoretical and does not report on empirical experiments or use datasets.
Dataset Splits | No | The paper is theoretical and does not report on empirical experiments or specify dataset splits.
Hardware Specification | No | The paper is theoretical and does not report on empirical experiments, thus no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not report on empirical experiments, thus no specific software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not report on empirical experiments, thus no specific experimental setup details or hyperparameters are provided.
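
As context for the "Research Type" row above: the agnostic PAC objective described in the abstract admits a standard formalization, sketched below in our own notation. This is a generic statement consistent with the abstract, not a verbatim excerpt from the paper.

```latex
% Agnostic PAC RL (standard formalization; notation is ours, not the paper's).
% Given a policy class $\Pi$ and parameters $\varepsilon, \delta \in (0,1)$,
% the learner interacts with an unknown MDP and must return a policy
% $\hat{\pi}$ satisfying, with probability at least $1 - \delta$,
\[
  V^{\hat{\pi}} \;\ge\; \max_{\pi \in \Pi} V^{\pi} - \varepsilon,
\]
% where $V^{\pi}$ denotes the expected cumulative reward of policy $\pi$.
% The sample complexity is the number of rounds of interaction required;
% the paper asks when this can be bounded in terms of $\Pi$ alone (e.g.,
% via the spanning capacity), independently of the sizes of the state and
% action spaces.
```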
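The "Pseudocode" row notes that POPLER takes inspiration from classical importance sampling. The sketch below shows plain trajectory-level importance sampling for off-policy evaluation; it is a textbook illustration of that classical primitive, not a reconstruction of Algorithm 1, and all names in it are ours.

```python
import numpy as np

def importance_sampling_value(trajectories, target_policy, behavior_policy):
    """Estimate the value of `target_policy` from episodes collected by
    `behavior_policy` via trajectory-level importance sampling.

    Generic off-policy evaluation sketch (illustrative only; NOT POPLER).

    trajectories: list of episodes, each a list of (state, action, reward).
    target_policy, behavior_policy: callables (action, state) -> probability
        of taking `action` in `state` under the respective policy.
    """
    estimates = []
    for episode in trajectories:
        weight = 1.0   # cumulative likelihood ratio along the trajectory
        ret = 0.0      # cumulative reward of the episode
        for state, action, reward in episode:
            weight *= target_policy(action, state) / behavior_policy(action, state)
            ret += reward
        estimates.append(weight * ret)
    # Unbiased estimate of the target policy's value whenever the behavior
    # policy puts positive probability on every action the target can take.
    return float(np.mean(estimates))
```

The well-known weakness of this estimator is that the weights can blow up over long horizons; that variance is a classical difficulty that more structured approaches, such as the reachable-state identification techniques the abstract references, aim to mitigate.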