Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
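The notice above mentions that the automated labels are validated against a manually labeled dataset. A minimal sketch of such a per-variable accuracy check might look like the following; all paper IDs, variable names, and labels here are illustrative, not taken from the actual pipeline in [1]:

```python
# Hypothetical sketch: comparing automated LLM labels against a manually
# labeled reference set, one accuracy score per reproducibility variable.
from collections import defaultdict

def per_variable_accuracy(llm_labels, manual_labels):
    """Fraction of papers where the LLM label matches the manual label,
    computed separately for each reproducibility variable."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for paper_id, manual in manual_labels.items():
        predicted = llm_labels.get(paper_id, {})
        for variable, gold in manual.items():
            total[variable] += 1
            if predicted.get(variable) == gold:
                correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}

# Illustrative data: one paper, two variables, one disagreement.
llm = {"paper-1": {"Open Source Code": "No", "Pseudocode": "Yes"}}
manual = {"paper-1": {"Open Source Code": "No", "Pseudocode": "No"}}
print(per_variable_accuracy(llm, manual))
# → {'Open Source Code': 1.0, 'Pseudocode': 0.0}
```

In practice such a check would be run over the whole manually labeled set, and the resulting per-variable accuracies are what the notice refers to as "full accuracy metrics."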

Improved Regret Bounds for Linear Adversarial MDPs via Linear Optimization

Authors: Fang Kong, XiangCheng Zhang, Baoxiang Wang, Shuai Li

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable — Result — LLM Response

Research Type — Theoretical. "In this paper, we propose a novel explore-exploit algorithm framework and investigate the problem from a new view, which reduces the linear MDP to linear optimization by subtly setting the feature maps of the bandit arms of the linear optimization. This new technique, under an exploratory assumption, yields an improved bound of O(K^{4/5}) for linear adversarial MDPs without access to a transition simulator. The new view could be of independent interest for solving other MDP problems that possess a linear structure. This section provides the regret guarantees for the proposed Algorithm 1 as well as a proof sketch."

Researcher Affiliation — Academia. Fang Kong (Shanghai Jiao Tong University); Xiangcheng Zhang (Tsinghua University); Baoxiang Wang (The Chinese University of Hong Kong, Shenzhen); Shuai Li (Shanghai Jiao Tong University).

Pseudocode — Yes. Algorithm 1: Geometric Hedge for Linear Adversarial MDP Policies (GLAP); Algorithm 2: Feature Visitation Estimation Oracle.

Open Source Code — No. The paper does not provide any concrete access to source code, such as a repository link, an explicit code-release statement, or code in supplementary materials.

Open Datasets — No. The paper is a theoretical work on algorithms and regret bounds for linear adversarial MDPs; it does not describe experiments on specific datasets, nor does it provide access information for any open dataset.

Dataset Splits — No. The paper is theoretical and does not use datasets for empirical evaluation, so no dataset splits are mentioned.

Hardware Specification — No. The paper is theoretical and describes no experiments that would require hardware specifications.

Software Dependencies — No. The paper is theoretical and does not list any versioned software dependencies needed to replicate experiments or implementations.

Experiment Setup — No. The paper focuses on regret bounds and algorithms; it does not describe experimental setups, hyperparameters, or training configurations.
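The paper's Algorithm 1 (GLAP) builds on the Geometric Hedge framework for linear bandits. A simplified sketch of that underlying idea — exponential weights over a finite set of feature vectors with a least-squares loss estimator — is given below. This is the generic Exp2/Geometric Hedge scheme, not the paper's actual algorithm, which additionally operates over MDP policies and uses a feature-visitation estimation oracle; the arm set, step sizes, and loss model are illustrative assumptions.

```python
import numpy as np

def geometric_hedge(arms, loss_fn, T, eta=0.05, gamma=0.1, seed=0):
    """Exp2-style Geometric Hedge over a finite arm set in R^d.

    arms    : (K, d) array, one feature vector per arm
    loss_fn : loss_fn(t, x) -> observed loss of playing arm x at round t
    Returns the final sampling distribution over arms.
    """
    rng = np.random.default_rng(seed)
    K, d = arms.shape
    cum_est = np.zeros(K)                 # cumulative estimated losses
    for t in range(T):
        w = np.exp(-eta * (cum_est - cum_est.min()))  # stabilized weights
        p = (1 - gamma) * w / w.sum() + gamma / K     # mix in exploration
        i = rng.choice(K, p=p)
        loss = float(loss_fn(t, arms[i]))
        cov = arms.T @ (arms * p[:, None])            # E_{x~p}[x x^T]
        theta_hat = np.linalg.solve(cov, arms[i] * loss)  # unbiased estimate
        cum_est += arms @ theta_hat                   # estimated loss per arm
    return p

# Toy run: arm 0 always incurs loss 1, arm 1 incurs loss 0, so the
# distribution should concentrate on arm 1.
arms = np.eye(2)
p = geometric_hedge(arms, lambda t, x: x[0], T=200)
```

The estimator `theta_hat` is unbiased because E[Cov^{-1} x x^T theta] = theta under the sampling distribution, which is the key property the regret analysis of such schemes relies on.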