Learning from Demonstrations with High-Level Side Information
Authors: Min Wen, Ivan Papusha, Ufuk Topcu
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper reports numerical results on a navigation example in which policies are learned with MLIRL, with MLIRL plus a specification automaton, and with the authors' own algorithm. The learned policy benefits both from the product-automaton construction and from evaluation against a co-safe LTL formula: it attains a higher probability of successfully completing the task and provides formal guarantees on task completion even in regions of the state space not covered by the expert demonstrations (a sketch of the product construction appears after the table). |
| Researcher Affiliation | Academia | Min Wen (University of Pennsylvania, wenm@seas.upenn.edu); Ivan Papusha (University of Texas at Austin, ipapusha@utexas.edu); Ufuk Topcu (University of Texas at Austin, utopcu@utexas.edu) |
| Pseudocode | No | The paper describes algorithmic steps and equations in text, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any links to open-source code or state that code for the described methodology is publicly available. |
| Open Datasets | No | The paper uses a custom '10-by-10 grid world map' and 'demonstrated trajectories' for its experiments. There is no indication that this dataset is publicly available, nor is a link or formal citation provided for access. |
| Dataset Splits | No | The paper does not provide specific training, validation, or test dataset splits for its demonstrated trajectories. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies or versions (e.g., programming languages, libraries, solvers with version numbers) used for the experiments. |
| Experiment Setup | Yes | The paper specifies experimental settings such as the trade-off parameter µ = 0.01 for the augmented objective function J_side, and details the design of features for Cases 2 and 3 in its Table 1 (a sketch of how µ weights the two objective terms appears after the table). |
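
Two rows above reference technical machinery worth unpacking. First, the product-automaton construction noted under Research Type. Below is a minimal sketch, assuming a deterministic grid world and a hand-written DFA for the co-safe reachability formula (!obstacle) U goal; the grid size, obstacle and goal cells, and every function name are illustrative assumptions, not taken from the paper.

```python
from typing import List, Tuple

# Grid-world MDP (deterministic moves for brevity; the paper's map is
# 10-by-10 and its exact dynamics are defined in the paper).
GRID = 4
OBSTACLES = {(1, 1), (2, 3)}   # hypothetical obstacle cells
GOAL = (3, 3)                  # hypothetical goal cell
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def step(cell: Tuple[int, int], action: str) -> Tuple[int, int]:
    """Deterministic grid dynamics; moves off the map stay in place."""
    dx, dy = ACTIONS[action]
    nx, ny = cell[0] + dx, cell[1] + dy
    return (nx, ny) if 0 <= nx < GRID and 0 <= ny < GRID else cell

def label(cell: Tuple[int, int]) -> str:
    """Atomic proposition holding at a cell."""
    if cell in OBSTACLES:
        return "obstacle"
    return "goal" if cell == GOAL else "none"

# DFA for the co-safe LTL formula (!obstacle) U goal:
# q0 = in progress, q_acc = accepted (task done), q_rej = violated.
def dfa_step(q: str, prop: str) -> str:
    if q != "q0":
        return q                # accepting/rejecting states are traps
    if prop == "goal":
        return "q_acc"
    return "q_rej" if prop == "obstacle" else "q0"

def product_step(prod: Tuple[Tuple[int, int], str], action: str):
    """One transition of the product automaton (MDP state, DFA state)."""
    cell, q = prod
    nxt = step(cell, action)
    return (nxt, dfa_step(q, label(nxt)))

def satisfies(cells: List[Tuple[int, int]]) -> bool:
    """Check a finite trajectory against the co-safe formula via the DFA."""
    q = "q0"
    for cell in cells:
        q = dfa_step(q, label(cell))
    return q == "q_acc"

# A demonstration that skirts the obstacles and reaches the goal.
demo = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
assert satisfies(demo)
```

Reaching an accepting DFA state in the product marks task completion, which is what lets satisfaction be evaluated even on states no demonstration visits.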
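Second, the µ = 0.01 trade-off under Experiment Setup. A schematic reading, assuming J_side adds a specification-satisfaction term, weighted by µ, to an MLIRL-style demonstration log-likelihood under a softmax policy; the exact objective is defined in the paper, and sat_prob below is a stand-in for a satisfaction probability that would be computed on the product automaton.

```python
import numpy as np

MU = 0.01  # trade-off weight reported in the paper's experiments

def softmax_policy(q_values: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """Boltzmann action distribution over state-action values."""
    z = np.exp(beta * (q_values - q_values.max(axis=-1, keepdims=True)))
    return z / z.sum(axis=-1, keepdims=True)

def demo_log_likelihood(policy: np.ndarray, demos) -> float:
    """Sum of log pi(a | s) over all demonstrated state-action pairs."""
    return float(sum(np.log(policy[s, a]) for traj in demos for s, a in traj))

def j_side(policy: np.ndarray, demos, sat_prob: float, mu: float = MU) -> float:
    """Augmented objective: demonstration fit plus mu times a
    specification-satisfaction term (a stand-in for the paper's J_side)."""
    return demo_log_likelihood(policy, demos) + mu * sat_prob

# Toy numbers: 2 states x 2 actions, one short demonstration.
q = np.array([[1.0, 0.0], [0.2, 0.8]])
pi = softmax_policy(q)
print(j_side(pi, demos=[[(0, 0), (1, 1)]], sat_prob=0.9))
```

On this reading, a small µ keeps the demonstration fit dominant while still rewarding policies that satisfy the specification in regions the demonstrations never visit.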