Soft Robust MDPs and Risk-Sensitive MDPs: Equivalence, Policy Gradient, and Sample Complexity
Authors: Runyu Zhang, Yang Hu, Na Li
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper introduces a new formulation for risk-sensitive MDPs, which assesses risk in a slightly different manner compared to the classical Markov risk measure [71], and establishes its equivalence with a class of soft robust MDP (RMDP) problems, including the standard RMDP as a special case. Leveraging this equivalence, we further derive the policy gradient theorem for both problems, proving gradient domination and global convergence of the exact policy gradient method under the tabular setting with direct parameterization. ... We also propose a sample-based offline learning algorithm, namely the robust fitted-Z iteration (RFZI), for a specific soft RMDP problem with a KL-divergence regularization term (or equivalently the risk-sensitive MDP with an entropy risk measure). We showcase its streamlined design and less stringent assumptions due to the equivalence and analyze its sample complexity. Due to space limit, we defer a detailed literature review and numerical simulations to the appendix. (The standard duality behind this KL/entropic-risk equivalence is sketched below the table.) |
| Researcher Affiliation | Academia | Runyu (Cathy) Zhang, Harvard University (runyuzhang@fas.harvard.edu); Yang Hu, Harvard University (yanghu@g.harvard.edu); Na Li, Harvard University (nali@seas.harvard.edu) |
| Pseudocode | Yes | Algorithm 1 Robust Fitted-Z Iteration (RFZI). 1: Input: offline dataset $\mathcal{D} = \{(s_i, a_i, r_i, s_i')\}_{i=1}^N$, function class $\mathcal{F}$. 2: Initialize: $Z_0 \equiv 1 \in \mathcal{F}$. 3: for $k = 0, \dots, K-1$ do 4: Update $Z_{k+1} = \arg\min_{Z \in \mathcal{F}} \widehat{L}(Z, Z_k)$. 5: end for. 6: Output: $\pi_K(s) = \arg\max_a \{\, r(s, a) - \gamma \beta^{-1} \log Z_K(s, a) \,\}$. (An illustrative Python sketch of this loop follows the table.) |
| Open Source Code | No | The paper states 'Due to space limit, we defer a detailed literature review and numerical simulations to the appendix.' and does not provide any specific repository links (e.g., GitHub, GitLab) or explicit statements such as 'We release our code' or 'The source code for our method is available at...'. |
| Open Datasets | No | The paper states it uses a 'pre-collected dataset $\mathcal{D} = \{(s_i, a_i, r_i, s_i')\}_{i=1}^N$' but does not specify a well-known public dataset name, provide a direct URL/DOI, cite a published paper containing the dataset with author/year, or state the dataset is in supplementary material. |
| Dataset Splits | No | The paper does not specify exact split percentages, absolute sample counts, or reference predefined splits for training, validation, and testing data. |
| Hardware Specification | No | The paper does not mention any specific hardware setup, such as GPU or CPU models, TPUs, or detailed cloud resource specifications used for running experiments. |
| Software Dependencies | No | The paper does not list specific software components with their version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1) that would be needed to replicate experiments. |
| Experiment Setup | No | The paper does not contain specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or details on model initialization, dropout rates, or training schedules for an experimental setup. |
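
On the equivalence noted in the Research Type row: for the KL-regularized case, the bridge between the soft RMDP and the entropic risk measure is the standard Donsker–Varadhan variational identity. The display below is that textbook identity (not the paper's exact statement), with $\beta > 0$ the risk parameter, $P$ the nominal transition kernel, and $V$ the next-state value:

$$
-\frac{1}{\beta}\log \mathbb{E}_{s'\sim P}\!\left[e^{-\beta V(s')}\right]
\;=\; \min_{\widetilde{P}} \left\{ \mathbb{E}_{s'\sim \widetilde{P}}\!\left[V(s')\right] + \frac{1}{\beta}\, D_{\mathrm{KL}}\big(\widetilde{P}\,\|\,P\big) \right\}
$$

The left-hand side is the entropic-risk backup of a risk-sensitive MDP; the right-hand side is a soft robust backup in which the adversary's choice of kernel $\widetilde{P}$ is penalized by its KL divergence from $P$, which is the sense in which the two problems coincide.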
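
On the Pseudocode row: the paper defines the empirical loss $\widehat{L}(Z, Z_k)$ precisely, but since only the algorithm skeleton is quoted above, the sketch below fills that step with an assumed squared-error regression onto samples of the entropic Bellman target $\exp(-\beta V_k(s'))$, where $V_k(s) = \max_a \{r(s,a) - \gamma\beta^{-1}\log Z_k(s,a)\}$; it also assumes a tabular function class and deterministic rewards. All names are illustrative, not the authors' code.

```python
import numpy as np

def rfzi(dataset, n_states, n_actions, beta, gamma, n_iters=50):
    """RFZI sketch on an offline dataset of (s, a, r, s_next) tuples,
    with a tabular function class and an assumed squared-error loss."""
    s = np.array([t[0] for t in dataset])
    a = np.array([t[1] for t in dataset])
    r = np.array([t[2] for t in dataset])
    s_next = np.array([t[3] for t in dataset])

    # Reward lookup table r(s, a); assumed deterministic for this sketch.
    R = np.zeros((n_states, n_actions))
    R[s, a] = r

    Z = np.ones((n_states, n_actions))  # Z_0 = 1, as in the pseudocode
    for _ in range(n_iters):
        # V_k(s) = max_a { r(s, a) - (gamma / beta) * log Z_k(s, a) }
        V = np.max(R - (gamma / beta) * np.log(Z), axis=1)
        # Empirical entropic Bellman targets: exp(-beta * V_k(s'))
        target = np.exp(-beta * V[s_next])
        # Tabular least-squares fit of Z_{k+1}: per-(s, a) average of targets.
        sums = np.zeros((n_states, n_actions))
        counts = np.zeros((n_states, n_actions))
        np.add.at(sums, (s, a), target)
        np.add.at(counts, (s, a), 1.0)
        Z = np.where(counts > 0, sums / np.maximum(counts, 1.0), 1.0)

    # pi_K(s) = argmax_a { r(s, a) - (gamma / beta) * log Z_K(s, a) }
    return np.argmax(R - (gamma / beta) * np.log(Z), axis=1)

# Example: 3-state, 2-action chain with synthetic offline transitions.
rng = np.random.default_rng(0)
data = [(int(si), int(ai), 1.0 if si == 2 else 0.0, int(rng.integers(3)))
        for si, ai in zip(rng.integers(3, size=500), rng.integers(2, size=500))]
policy = rfzi(data, n_states=3, n_actions=2, beta=1.0, gamma=0.9)
```

With a parametric function class, the per-pair averaging step would be replaced by a regression fit over $\mathcal{F}$; unvisited state-action pairs simply keep the all-ones initialization $Z_0 \equiv 1$.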