Toward the Fundamental Limits of Imitation Learning

Authors: Nived Rajaraman, Lin Yang, Jiantao Jiao, Kannan Ramchandran

NeurIPS 2020

Reproducibility Variable — Result — LLM Response
Research Type — Theoretical. In this paper, we focus on understanding the minimax statistical limits of imitation learning (IL) in episodic Markov Decision Processes (MDPs). We first consider the setting where the learner is provided a dataset of N expert trajectories ahead of time and cannot interact with the MDP. Here, we show that the policy which mimics the expert whenever possible is in expectation at most |S|H² log(N)/N suboptimal compared to the value of the expert, even when the expert plays a stochastic policy; here |S| is the size of the state space and H is the length of the episode. To our knowledge, this is the first such guarantee whose suboptimality has no dependence on the number of actions, under no additional assumptions. Furthermore, we establish a suboptimality lower bound of |S|H²/N, which applies even if the expert is constrained to be deterministic, or if the learner is allowed to actively query the expert at visited states while interacting with the MDP for N episodes. We then propose a novel algorithm based on minimum-distance functionals for the setting where the transition model is given and the expert is deterministic. The resulting algorithm is suboptimal by at most |S|H^{3/2}/N, matching our lower bound up to a √H factor, and breaks the O(H²) error-compounding barrier of IL.
Researcher Affiliation — Academia.
Nived Rajaraman, University of California, Berkeley (nived.rajaraman@berkeley.edu)
Lin F. Yang, University of California, Los Angeles (linyang@ee.ucla.edu)
Jiantao Jiao, University of California, Berkeley (jiantao@eecs.berkeley.edu)
Kannan Ramchandran, University of California, Berkeley (kannanr@eecs.berkeley.edu)
Pseudocode — Yes. The paper includes Algorithm 1 (MIMIC-EMP) and Algorithm 2 (MIMIC-MD).
Open Source Code — No. The paper does not provide concrete access to source code for the methodology described.
Open Datasets — No. The paper focuses on theoretical analysis and does not use datasets for training, so it provides no access information for a publicly available or open dataset.
Dataset Splits — No. The paper is theoretical and does not describe empirical experiments, so it specifies no training/validation/test splits.
Hardware Specification — No. The paper is theoretical and does not describe empirical experiments, so it gives no details of hardware used to run experiments.
Software Dependencies — No. The paper is theoretical and does not describe specific software implementations or dependencies with version numbers.
Experiment Setup — No. The paper focuses on theoretical analysis and algorithm design rather than empirical experimentation, so it contains no experimental setup details such as hyperparameter values or training configurations.
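As an illustration of the first result above, the "policy which mimics the expert whenever possible" can be sketched in tabular form: at every (timestep, state) pair that appears in the expert dataset, play the empirically most frequent expert action; at unvisited states the policy is unconstrained. This is a hedged, minimal sketch of that idea, not the paper's exact MIMIC-EMP pseudocode; the function and parameter names are my own, and the fallback action at unseen states is an arbitrary choice.

```python
from collections import Counter, defaultdict

def mimic_expert(expert_trajectories):
    """Tabular imitation from offline expert data.

    expert_trajectories: list of trajectories, each a list of
    (state, action) pairs of length H (one pair per timestep h).
    Returns a policy(h, s) that plays the empirically most frequent
    expert action at (h, s), or a default action if (h, s) was never
    visited by the expert (hypothetical tie-breaking/fallback choice).
    """
    counts = defaultdict(Counter)  # (h, state) -> Counter over actions
    for traj in expert_trajectories:
        for h, (s, a) in enumerate(traj):
            counts[(h, s)][a] += 1

    def policy(h, s, default_action=0):
        if (h, s) in counts:
            return counts[(h, s)].most_common(1)[0][0]
        return default_action  # state unseen in the expert dataset

    return policy
```

At states covered by the N expert trajectories the learner incurs no imitation error; the |S|H² log(N)/N suboptimality in the paper's analysis comes from the mass of states the dataset fails to cover.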