Maximum Likelihood Constraint Inference for Inverse Reinforcement Learning
Authors: Dexter R.R. Scobee, S. Shankar Sastry
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle. |
| Researcher Affiliation | Academia | Dexter R.R. Scobee & S. Shankar Sastry, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley |
| Pseudocode | Yes | Algorithm 1: Feature Accrual History Calculation; Algorithm 2: Greedy Iterative Constraint Inference |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper mentions using a 'synthetic grid world' and 'recorded data of humans navigating around an obstacle' collected from 16 volunteers. No information, citation, or link is provided to access either of these datasets publicly. |
| Dataset Splits | No | The paper mentions using '100 demonstrations' for the synthetic grid world and 'demonstrations collected from 16 volunteers' for the human obstacle avoidance. However, it does not specify any explicit training, validation, or test splits, nor does it refer to standard splits with citations. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers). |
| Experiment Setup | Yes | The threshold parameter d_KL is chosen to avoid overfitting to the demonstrations, combating the tendency to select additional constraints that may only marginally better align our predictions with the demonstrations. The threshold d_KL = 0.1 achieves a good balance of producing few false positives with sufficient examples while also producing lower KL divergences, and we used this threshold to produce the results in Figures 3 and 5. |
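The greedy iterative constraint inference described above (Algorithm 2, with the d_KL = 0.1 stopping threshold) can be illustrated with a small sketch. This is not the authors' implementation: the toy trajectories, the uniform stand-in for the maximum-entropy trajectory distribution, and the per-demonstration likelihood-gain criterion are all illustrative assumptions; the paper's actual method works on a grid-world MDP with feature accruals.

```python
import math

# Hypothetical toy setup: trajectories over a tiny state space, each a tuple
# of states. Demonstrations are the observed trajectories; the nominal model
# assigns uniform probability to every feasible trajectory (a simplified
# stand-in for the maximum-entropy trajectory distribution in the paper).
all_trajectories = [
    ("s0", "s1", "s3"),
    ("s0", "s2", "s3"),
    ("s0", "s4", "s3"),
    ("s0", "s5", "s3"),
]
demos = [("s0", "s1", "s3"), ("s0", "s2", "s3")]
candidate_constraints = ["s1", "s2", "s4", "s5"]  # states we might forbid

def feasible(trajs, forbidden):
    """Trajectories that avoid every forbidden state."""
    return [t for t in trajs if not any(s in forbidden for s in t)]

def demo_log_likelihood(forbidden):
    """Log-likelihood of the demos under a uniform model over feasible
    trajectories; -inf if a constraint rules out a demonstrated trajectory."""
    allowed = feasible(all_trajectories, forbidden)
    if any(d not in allowed for d in demos):
        return -math.inf
    return len(demos) * math.log(1.0 / len(allowed))

# Greedy loop (sketch): repeatedly add the single constraint that most
# increases demonstration likelihood, stopping once the per-demonstration
# improvement (in nats) falls below a threshold analogous to d_KL = 0.1.
threshold = 0.1
forbidden = set()
while True:
    current = demo_log_likelihood(forbidden)
    best_gain, best_c = 0.0, None
    for c in candidate_constraints:
        if c in forbidden:
            continue
        gain = (demo_log_likelihood(forbidden | {c}) - current) / len(demos)
        if gain > best_gain:
            best_gain, best_c = gain, c
    if best_c is None or best_gain < threshold:
        break
    forbidden.add(best_c)

print(sorted(forbidden))  # → ['s4', 's5']: the states the demos never visit
```

The loop forbids exactly the states the demonstrator avoided, since each such constraint shrinks the feasible set and so raises the likelihood of the observed behavior, while constraints touching demonstrated states are rejected outright.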