Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback

Authors: Haolin Liu, Chen-Yu Wei, Julian Zimmert

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback, without prior knowledge of transitions or access to simulators. We introduce two algorithms that achieve improved regret performance compared to existing approaches. The first algorithm, although computationally inefficient, ensures a regret of Õ(√K), where K is the number of episodes. This is the first result with the optimal √K dependence in the considered setting. The second algorithm, which is based on the policy optimization framework, guarantees a regret of Õ(K^{3/4}) and is computationally efficient. Both our results significantly improve over the state of the art: a computationally inefficient algorithm by Kong et al. (2023) with Õ(K^{4/5} + poly(1/λ_min)) regret, for some problem-dependent constant λ_min that can be arbitrarily close to zero, and a computationally efficient algorithm by Sherman et al. (2023b) with Õ(K^{6/7}) regret.
Researcher Affiliation | Collaboration | Haolin Liu¹, Chen-Yu Wei¹, Julian Zimmert²; ¹Department of Computer Science, University of Virginia; ²Google Research; {srs8rh, kfw6en}@virginia.edu, zimmert@google.com
Pseudocode | Yes | Algorithm 1: EstOM(π, (D_h)_{h=1}^H) (Estimate Occupancy Measure) ... Algorithm 2: Exponential Weights ... Algorithm 3: Logdet-FTRL with initial exploration ... Algorithm 4: OBME((D_{k,h})_{h=1}^H, (Σ̂_{k,h})_{h=1}^H, (Z_h)_{h=1}^H) (Optimistic Bonus Matrix Estimation) ... Algorithm 5: Initial Pure Exploration (Algorithm 2 of Sherman et al. (2023a))
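The paper's Algorithm 2 is an exponential-weights procedure. As a general illustration only (a minimal textbook Hedge update, not the paper's exact instantiation over policies), the core weight update can be sketched as:

```python
import numpy as np

def exponential_weights_update(weights, losses, eta):
    """One round of the exponential-weights (Hedge) update:
    scale each action's weight by exp(-eta * loss), then renormalize
    so the weights remain a probability distribution."""
    new_weights = weights * np.exp(-eta * np.asarray(losses, dtype=float))
    return new_weights / new_weights.sum()

# Example: three actions, uniform prior, one round of observed losses.
w = np.ones(3) / 3
w = exponential_weights_update(w, [1.0, 0.0, 0.5], eta=0.5)
# The action with the smallest loss receives the largest weight.
```

Here `eta` is a learning rate; in the bandit-feedback setting of the paper, the losses fed into such an update would be importance-weighted estimates rather than directly observed values.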
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | No | The paper does not describe the use of any specific dataset for empirical evaluation, nor does it provide concrete access information for a publicly available or open dataset.
Dataset Splits | No | The paper does not describe any specific dataset split information (percentages, counts, or references to predefined splits) needed for reproduction.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware details used for running experiments.
Software Dependencies | No | The paper is theoretical and does not mention specific ancillary software details with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe experimental setup details like hyperparameters or training configurations.