Rate-Optimal Policy Optimization for Linear Markov Decision Processes

Authors: Uri Sherman, Alon Cohen, Tomer Koren, Yishay Mansour

ICML 2024

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Theoretical | We study regret minimization in online episodic linear Markov Decision Processes, and propose a policy optimization algorithm that is computationally efficient, and obtains rate optimal Õ(√K) regret where K denotes the number of episodes. Our work is the first to establish the optimal rate (in terms of K) of convergence in the stochastic setting with bandit feedback using a policy optimization based approach, and the first to establish the optimal rate in the adversarial setup with full information feedback, for which no algorithm with an optimal rate guarantee was previously known. The paper focuses on theoretical analysis, proving regret bounds (e.g., Theorem 1, Lemma 3, Lemma 4) and presenting algorithms (Algorithms 1 and 2) without mentioning empirical evaluation on datasets, hardware, or performance metrics from experiments. (The objects behind the Õ(√K) rate are formalized after the table.)
Researcher Affiliation | Collaboration | (1) Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel; (2) Google Research, Tel Aviv, Israel.
Pseudocode | Yes | Algorithm 1: Optimistic PO for Linear MDPs; Algorithm 2: Reward-Free Warmup (a hypothetical sketch of this optimistic-PO template appears below the table).
Open Source Code | No | The paper does not include any statement about releasing source code, nor links to code repositories for the described methodology.
Open Datasets | No | This is a theoretical paper on regret minimization in linear MDPs; it defines the problem setups (adversarial and stochastic) without referring to specific named or publicly available datasets for training, validation, or testing.
Dataset Splits | No | As a theoretical paper, it does not mention any dataset splits (training, validation, test).
Hardware Specification | No | As a theoretical paper, it does not mention any hardware used for experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper does not describe a concrete experimental setup with hyperparameters or training configurations.
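
Since the Research Type row cites the Õ(√K) rate without stating the objects involved, here is the standard formalization of linear MDPs and episodic regret that such a bound refers to. The notation is the conventional one from this literature and is assumed rather than quoted from the paper:

```latex
% Linear MDP assumption: transitions and rewards are linear in a known
% d-dimensional feature map \phi (standard formulation; notation assumed).
P_h(s' \mid s, a) = \langle \phi(s, a), \mu_h(s') \rangle, \qquad
r_h(s, a) = \langle \phi(s, a), \theta_h \rangle .

% Episodic regret over K episodes against the best fixed policy; "rate
% optimal" means the \sqrt{K} dependence below is unimprovable (dependence
% on the horizon and feature dimension is suppressed in the \widetilde{O}).
\mathrm{Regret}(K)
  = \max_{\pi} \sum_{k=1}^{K}
    \left( V_1^{\pi, k}(s_1) - V_1^{\pi_k, k}(s_1) \right)
  \le \widetilde{O}\!\left(\sqrt{K}\right).
```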
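
Since no code accompanies the paper, the following is a minimal, hypothetical sketch of the generic template that Algorithm 1 belongs to: least-squares Q estimates with an elliptical exploration bonus (optimism), followed by a softmax mirror-descent policy update. The toy environment, constants, and all names below are assumptions made for illustration; this is not the paper's Algorithm 1 or 2, nor its reward-free warmup.

```python
import numpy as np

# Toy "linear MDP" instance; everything here is illustrative, not from the paper.
rng = np.random.default_rng(0)
S, A, d, H, K = 5, 3, 4, 6, 200          # states, actions, feature dim, horizon, episodes
phi = rng.normal(size=(S, A, d))          # known feature map phi(s, a)
theta_r = 0.1 * rng.normal(size=d)        # toy reward parameter
W = rng.random(size=(d, S))               # toy transition parameter

def reward(s, a):
    return float(np.clip(phi[s, a] @ theta_r, 0.0, 1.0))

def step(s, a):
    p = np.clip(phi[s, a] @ W, 1e-6, None)   # crude "linear" transition kernel
    return rng.choice(S, p=p / p.sum())

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

eta, beta, lam = 0.1, 0.5, 1.0            # step size, bonus scale, ridge parameter
logits = np.zeros((H, S, A))              # per-step softmax policy parameters

for k in range(K):
    # Roll out one episode with the current softmax policy.
    traj, s = [], int(rng.integers(S))
    for h in range(H):
        a = int(rng.choice(A, p=softmax(logits[h, s])))
        s2 = step(s, a)
        traj.append((h, s, a, reward(s, a), s2))
        s = s2

    # Backward pass: ridge regression on Bellman targets plus an elliptical
    # bonus, giving optimistic Q estimates. (A full method would aggregate
    # data across all past episodes; one episode keeps the sketch short.)
    Q = np.zeros((H + 1, S, A))
    for h in range(H - 1, -1, -1):
        Sigma, y = lam * np.eye(d), np.zeros(d)
        for (hh, s_, a_, r_, s2_) in traj:
            if hh != h:
                continue
            v_next = softmax(logits[h + 1, s2_]) @ Q[h + 1, s2_] if h + 1 < H else 0.0
            x = phi[s_, a_]
            Sigma += np.outer(x, x)
            y += x * (r_ + v_next)
        w = np.linalg.solve(Sigma, y)
        Sinv = np.linalg.inv(Sigma)
        bonus = beta * np.sqrt(np.einsum("sad,de,sae->sa", phi, Sinv, phi))
        Q[h] = phi @ w + bonus            # optimism via the elliptical bonus

    logits += eta * Q[:H]                 # mirror-descent policy improvement
```

The point of the sketch is only to show where optimism (the bonus term) and the policy-optimization step (the softmax/mirror-descent update) enter, which is the structural feature that distinguishes the policy-optimization approach studied in the paper from value-based methods.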