Intent Prediction and Trajectory Forecasting via Predictive Inverse Linear-Quadratic Regulation
Authors: Mathew Monfort, Anqi Liu, Brian D. Ziebart
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We employ the Cornell Activity Dataset (CAD-120) (Koppula and Saxena 2013) in order to analyze and evaluate our predictive inverse linear-quadratic regulation model. We compare the predictive accuracy of our method against the aforementioned techniques using the averaged mean log-loss (Begleiter, El-yaniv, and Yona 2004; Nguyen and Guo 2007). This allows us to compare the likelihood of the demonstrated trajectory to the distance and activity measures previously discussed. As we show in Figure 1, the presented LQR method outperforms the other predictive techniques. This is mainly due to the incorporation of our sophisticated LQR likelihood model for the demonstrated sequence trajectories with the prior target distribution. |
| Researcher Affiliation | Academia | Mathew Monfort, Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, mmonfo2@uic.edu; Anqi Liu, Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, aliu33@uic.edu; Brian D. Ziebart, Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, bziebart@uic.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes equations and mathematical formulations but not in a pseudocode format. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It mentions using the Armadillo C++ linear algebra library but does not state that their own code is open-source or provide a link. |
| Open Datasets | Yes | We employ the Cornell Activity Dataset (CAD-120) (Koppula and Saxena 2013) in order to analyze and evaluate our predictive inverse linear-quadratic regulation model. |
| Dataset Splits | Yes | The data is randomly divided into a training set and a test set. The test set consists of 10% of the demonstrated sequences with at least one sequence belonging to each sub-activity. The model is then trained on the training set and evaluated using the test set. |
| Hardware Specification | Yes | These execution times were collected on an Intel i7-3720QM CPU at 2.60GHz with 16 GB of RAM. |
| Software Dependencies | Yes | In order to further improve the efficiency of our computation we employ the Armadillo C++ linear algebra library for fast linear computation (Sanderson 2010). |
| Experiment Setup | Yes | Our LQR model uses two separate parameter matrices, M and Mf. We employ accelerated stochastic gradient descent with an adaptive (adagrad) learning rate and L1 regularization (Duchi, Hazan, and Singer 2011; Sutskever et al. 2013) on both parameter matrices simultaneously. This regularized approach prevents overfitting of the parameter matrices for sub-activities with a low number of demonstrated trajectories. |
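The split protocol quoted in the Dataset Splits row (a random 10% test set with at least one sequence per sub-activity) can be sketched as below. This is an illustrative reconstruction, not the authors' code; the paper does not specify the sampling procedure beyond that description, and the function and variable names here are assumptions.

```python
import random
from collections import defaultdict

def stratified_test_split(labels, test_frac=0.1, seed=0):
    """Return (train_idx, test_idx) holding out ~test_frac of the sequences
    while guaranteeing at least one test sequence per sub-activity label.
    `labels` gives the sub-activity label of each demonstrated sequence."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, lab in enumerate(labels):
        by_label[lab].append(i)
    # first guarantee one test sequence per sub-activity
    test = {rng.choice(idxs) for idxs in by_label.values()}
    # then top up randomly until the test set reaches test_frac of the data
    pool = [i for i in range(len(labels)) if i not in test]
    rng.shuffle(pool)
    target = max(len(test), round(test_frac * len(labels)))
    test.update(pool[: max(0, target - len(test))])
    train = [i for i in range(len(labels)) if i not in test]
    return train, sorted(test)
```

When a sub-activity has few demonstrations, the per-label guarantee can push the test fraction slightly above 10%, which matches the paper's "at least one sequence belonging to each sub-activity" constraint.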
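The Experiment Setup row names the optimization recipe (stochastic gradient descent with an adagrad learning rate and L1 regularization, per Duchi, Hazan, and Singer 2011) without giving code. A minimal single-step sketch of adagrad with a proximal (soft-threshold) L1 update follows; it is a generic illustration of that recipe, not the authors' implementation, and the learning-rate and penalty values are placeholders.

```python
import numpy as np

def adagrad_l1_step(w, grad, G, lr=0.1, l1=0.01, eps=1e-8):
    """One adagrad step with a proximal L1 update.

    w:    parameter vector (e.g. a flattened parameter matrix M or Mf)
    grad: gradient of the loss at w
    G:    running sum of squared gradients (updated in place)
    """
    G += grad ** 2
    step = lr / (np.sqrt(G) + eps)          # per-coordinate adaptive rate
    w_tmp = w - step * grad                 # plain adagrad step
    # soft-thresholding applies the L1 penalty, driving small weights to
    # exactly zero -- the sparsity that guards against overfitting on
    # sub-activities with few demonstrated trajectories
    w_new = np.sign(w_tmp) * np.maximum(np.abs(w_tmp) - step * l1, 0.0)
    return w_new, G
```

The paper applies this kind of update to both parameter matrices M and Mf simultaneously, with the L1 term preventing overfitting where training data is scarce.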