Reward Identification in Inverse Reinforcement Learning

Authors: Kuno Kim, Shivam Garg, Kirankumar Shiragur, Stefano Ermon

ICML 2021

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical | In this work, we formalize the reward identification problem in IRL and study how identifiability relates to properties of the MDP model. For deterministic MDP models with the Max Ent RL objective, we prove necessary and sufficient conditions for identifiability. Building on these results, we present efficient algorithms for testing whether or not an MDP model is identifiable.

Researcher Affiliation | Academia | Department of Computer Science, Stanford University, Palo Alto, USA.

Pseudocode | Yes | Algorithm 1: Strong Identifiability Test for MDP models with Strongly Connected Domain Graphs; Algorithm 2: Strong Identifiability Sufficiency Test for General MDP models.

Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.

Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, so it does not mention public dataset availability.

Dataset Splits | No | The paper is theoretical and does not conduct experiments on datasets, so it does not provide information about training/validation/test splits.

Hardware Specification | No | The paper is theoretical and does not report experiments, so it does not provide hardware specifications.

Software Dependencies | No | The paper is theoretical and does not report software dependencies with specific version numbers.

Experiment Setup | No | The paper is theoretical and does not report experiments, so it does not provide details about experimental setup or hyperparameters.
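The title of Algorithm 1 above indicates that strong connectivity of an MDP's domain graph is the setting in which the paper's identifiability test applies. As a minimal sketch (not the paper's algorithm), the strong-connectivity precondition itself can be checked with a single forward and reverse reachability pass from one state; the `trans` representation below (a dict mapping `(state, action)` to the next state of a deterministic MDP) is an assumption for illustration.

```python
# Hypothetical sketch: check whether the domain graph of a deterministic MDP
# is strongly connected. This is only the precondition named in the title of
# Algorithm 1, not the identifiability test itself.

def is_strongly_connected(states, trans):
    """True iff every state can reach every other state in the domain graph."""
    def reachable(start, edges):
        # Iterative DFS over an adjacency dict.
        seen, stack = {start}, [start]
        while stack:
            s = stack.pop()
            for t in edges.get(s, ()):
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    # Build forward and reverse adjacency from deterministic transitions.
    fwd, rev = {}, {}
    for (s, _a), t in trans.items():
        fwd.setdefault(s, set()).add(t)
        rev.setdefault(t, set()).add(s)

    # Kosaraju-style single-source check: the graph is strongly connected
    # iff some state reaches all states both forward and backward.
    s0 = next(iter(states))
    return reachable(s0, fwd) == set(states) == reachable(s0, rev)

# Example: a 3-state cycle is strongly connected.
trans = {(0, "a"): 1, (1, "a"): 2, (2, "a"): 0}
print(is_strongly_connected({0, 1, 2}, trans))  # True
```

A graph failing this check (e.g. an absorbing state with no outgoing edge back) would fall under the general-MDP sufficiency test of Algorithm 2 rather than Algorithm 1.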