Follow-the-Perturbed-Leader for Adversarial Markov Decision Processes with Bandit Feedback
Authors: Yan Dai, Haipeng Luo, Liyu Chen
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We consider regret minimization for Adversarial Markov Decision Processes (AMDPs), where the loss functions change over time and are adversarially chosen, and the learner only observes the losses for the visited state-action pairs (i.e., bandit feedback). While there has been a surge of studies on this problem using Online-Mirror-Descent (OMD) methods, very little is known about Follow-the-Perturbed-Leader (FTPL) methods, which are usually computationally more efficient and also easier to implement since they only require solving an offline planning problem. Motivated by this, we take a closer look at FTPL for learning AMDPs, starting from the standard episodic finite-horizon setting. We find some unique and intriguing difficulties in the analysis and propose a workaround to eventually show that FTPL is also able to achieve near-optimal regret bounds in this case. More importantly, we then find two significant applications: First, the analysis of FTPL turns out to be readily generalizable to delayed bandit feedback with order-optimal regret, while OMD methods exhibit extra difficulties (Jin et al., 2022). Second, using FTPL, we also develop the first no-regret algorithm for learning communicating AMDPs in the infinite-horizon setting with bandit feedback and stochastic transitions. |
| Researcher Affiliation | Academia | Yan Dai Tsinghua University yan-dai20@mails.tsinghua.edu.cn Haipeng Luo University of Southern California haipengl@usc.edu Liyu Chen University of Southern California liyuc@usc.edu |
| Pseudocode | Yes | Algorithm 1 FTPL for Episodic AMDPs with Bandit Feedback and Known Transition |
| Open Source Code | No | The paper provides no explicit statement about, or link to, open-source code for the described methodology; it is a theoretical paper, as the authors indicate in the ethics checklist. |
| Open Datasets | No | The paper does not describe any specific datasets used for training or provide access information for any public or open datasets. It is a theoretical paper. |
| Dataset Splits | No | The paper does not discuss dataset splits (train/validation/test) as it is a theoretical work and does not perform empirical evaluation. |
| Hardware Specification | No | The paper does not mention any hardware specifications for running experiments, as it is a theoretical work. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers, as it is a theoretical paper presenting algorithms and analysis. |
| Experiment Setup | No | The paper does not provide details on experimental setup, hyperparameters, or training settings, as it is a theoretical paper and does not conduct experiments. |
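Since the paper releases pseudocode (Algorithm 1) but no source code, a rough, hypothetical sketch of the FTPL template it builds on may help orient the reader. The sketch below illustrates FTPL with geometric resampling in the simpler multi-armed bandit setting only; the function name, parameters, and truncation constant `M` are illustrative choices, not from the paper, and the paper's actual Algorithm 1 additionally relies on an offline planning oracle over occupancy measures for episodic AMDPs.

```python
import math
import random


def ftpl_bandit(n_actions, T, loss_fn, eta=None, rng=None):
    """Illustrative FTPL for adversarial multi-armed bandits.

    Each round, fresh exponential perturbations are added to the
    importance-weighted cumulative loss estimates and the perturbed
    leader (argmin) is played. This is a sketch of the general FTPL
    template, not the paper's AMDP algorithm.
    """
    rng = rng if rng is not None else random.Random()
    if eta is None:
        # A standard-order learning rate for K-armed bandits over T rounds.
        eta = math.sqrt(n_actions * math.log(n_actions) / T)
    cum = [0.0] * n_actions  # importance-weighted cumulative loss estimates
    total = 0.0

    def leader():
        # Perturb each estimate with fresh exponential noise; follow the argmin.
        pert = [eta * cum[i] - rng.expovariate(1.0) for i in range(n_actions)]
        return min(range(n_actions), key=pert.__getitem__)

    for t in range(T):
        a = leader()
        loss = loss_fn(t, a)  # bandit feedback: only the chosen arm's loss
        total += loss
        # Geometric resampling: the number of fresh perturbation draws
        # until `a` is the leader again estimates 1 / P(a is chosen),
        # truncated at M to keep the variance bounded.
        M = 500
        k = 1
        while k < M and leader() != a:
            k += 1
        cum[a] += loss * k  # E[loss * k] is (approximately) the unbiased estimate
    return total
```

The geometric-resampling step sidesteps computing the selection probability in closed form, which is exactly what makes FTPL attractive when the decision set is complex (as with occupancy measures in AMDPs): only repeated calls to the perturbed argmin oracle are needed.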