Towards Theoretical Understanding of Inverse Reinforcement Learning

Authors: Alberto Maria Metelli, Filippo Lazzati, Marcello Restelli

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we make a step towards closing the theory gap of IRL in the case of finite-horizon problems with a generative model. We start by formally introducing the problem of estimating the feasible reward set, the corresponding PAC requirement, and discussing the properties of particular classes of rewards. Then, we provide the first minimax lower bound on the sample complexity for the problem of estimating the feasible reward set, of order Ω((H³SA/ϵ²)(log(1/δ) + S)), where S and A are the number of states and actions respectively, H the horizon, ϵ the desired accuracy, and δ the confidence (the bound is restated in display form after this table). We analyze the sample complexity of a uniform sampling strategy (US-IRL), proving a matching upper bound up to logarithmic factors.
Researcher Affiliation | Academia | Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy.
Pseudocode | Yes | Algorithm 1: Uniform Sampling-IRL (US-IRL), for time-inhomogeneous (resp. time-homogeneous) transition models (a sketch of its sampling phase appears after this table).
Open Source Code | No | The paper does not mention the release of open-source code or provide any links to a code repository.
Open Datasets | No | The paper is theoretical and does not use or reference any specific dataset for training, validation, or testing.
Dataset Splits | No | The paper is theoretical and does not involve empirical validation on datasets with explicit training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware used for computations.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies or version numbers required for reproduction.
Experiment Setup | No | The paper is theoretical and focuses on mathematical analysis rather than experimental setup details such as hyperparameters or training configurations.
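
For readability, the lower bound quoted in the Research Type row above is restated below in display form. This is a direct transcription of the order stated in the paper's abstract, with S states, A actions, horizon H, target accuracy ϵ, and confidence δ.

```latex
% Minimax lower bound on the sample complexity of estimating the
% feasible reward set (Metelli, Lazzati & Restelli, ICML 2023):
\Omega\!\left( \frac{H^{3} S A}{\epsilon^{2}} \left( \log\frac{1}{\delta} + S \right) \right)
% US-IRL attains a matching upper bound up to logarithmic factors.
```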
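
Since the paper presents US-IRL only as pseudocode (Algorithm 1), here is a minimal Python sketch of its uniform-sampling phase for the time-inhomogeneous case, under stated assumptions: the callable generative_model and the per-triple budget n are hypothetical placeholders for whatever generative-model interface is available, not the authors' implementation.

```python
import numpy as np

def us_irl_sampling(generative_model, S, A, H, n):
    """Sketch of the uniform-sampling phase of US-IRL
    (time-inhomogeneous case): query the generative model n times
    for every (h, s, a) triple and return the empirical transition
    probabilities P_hat[h, s, a, s'].

    `generative_model(h, s, a)` is an assumed interface returning an
    integer next state drawn from p_h(. | s, a).
    """
    counts = np.zeros((H, S, A, S))
    for h in range(H):
        for s in range(S):
            for a in range(A):
                for _ in range(n):
                    s_next = generative_model(h, s, a)
                    counts[h, s, a, s_next] += 1
    return counts / n  # empirical estimate of each p_h(s' | s, a)

# In the time-homogeneous case the h-dependence is dropped, so the
# H * n samples per (s, a) pair would be pooled into one estimate.
```

From these empirical transitions, US-IRL builds the estimate of the feasible reward set; the paper's analysis shows that this uniform allocation matches the minimax lower bound above up to logarithmic factors.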