Backprop KF: Learning Discriminative Deterministic State Estimators

Authors: Tuomas Haarnoja, Anurag Ajay, Sergey Levine, Pieter Abbeel

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach on synthetic tracking task with raw image inputs and on the visual odometry task in the KITTI dataset. The results show significant improvement over both standard generative approaches and regular recurrent neural networks." "In this section, we compare our deterministic discriminatively trained state estimator with a set of alternative methods, including simple feedforward convolutional networks, piecewise-trained Kalman filter, and fully general LSTM models. We evaluate these models on two tasks that require processing of raw image input: synthetic task of tracking a red disk in the presence of clutter and severe occlusion; and the KITTI visual odometry task [8]."
Researcher Affiliation | Academia | "Tuomas Haarnoja, Anurag Ajay, Sergey Levine, Pieter Abbeel {haarnoja, anuragajay, svlevine, pabbeel}@berkeley.edu Department of Computer Science, University of California, Berkeley"
Pseudocode | No | The paper includes diagrams of computation graphs (e.g., Figure 3) but no formal pseudocode or algorithm blocks.
Open Source Code | No | No explicit statement or link indicating the release of source code for the described methodology was found.
Open Datasets | Yes | "We evaluate our approach on synthetic tracking task with raw image inputs and on the visual odometry task in the KITTI dataset [8]."
Dataset Splits | Yes | "We evaluated each model using 11-fold cross-validation, and we report the average errors of the held-out trajectories over the folds." "We trained the models by randomly sampling subsequences of 100 time steps."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names with versions) are explicitly mentioned.
Experiment Setup | Yes | "We trained the models by randomly sampling subsequences of 100 time steps." "For training the Kalman filter variants, we used a simplified state-space model with three of the state variables corresponding to the vehicle's 2D pose (two spatial coordinates and heading) and two for the forward and angular velocities." "...trained a feedforward network, consisting of four convolutional and two fully connected layers and having approximately half a million parameters, to estimate the velocities from pairs of images at consecutive time steps."
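The simplified state-space model quoted in the Experiment Setup row (a 5-D state: 2D position, heading, forward velocity, angular velocity, with the network observing the two velocities) can be sketched as a standard extended Kalman filter step. This is an illustrative reconstruction in NumPy, not the authors' code; the time step `DT`, the unicycle transition function, and all noise covariances are assumptions made for the sketch.

```python
import numpy as np

DT = 0.1  # assumed time step between frames; not stated in the report


def predict(state, P, Q):
    """EKF predict step for the 5-D state [x, y, heading, v, omega]."""
    x, y, th, v, om = state
    new_state = np.array([
        x + v * np.cos(th) * DT,  # position advances along the heading
        y + v * np.sin(th) * DT,
        th + om * DT,             # heading integrates angular velocity
        v,                        # velocities modeled as a random walk
        om,
    ])
    # Jacobian of the transition function w.r.t. the state
    F = np.eye(5)
    F[0, 2] = -v * np.sin(th) * DT
    F[0, 3] = np.cos(th) * DT
    F[1, 2] = v * np.cos(th) * DT
    F[1, 3] = np.sin(th) * DT
    F[2, 4] = DT
    return new_state, F @ P @ F.T + Q


def update(state, P, z, R):
    """Kalman update with an observation z = [v, omega] of the velocities."""
    H = np.zeros((2, 5))
    H[0, 3] = 1.0  # observe forward velocity
    H[1, 4] = 1.0  # observe angular velocity
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    state = state + K @ (z - H @ state)
    P = (np.eye(5) - K @ H) @ P
    return state, P
```

In the paper's full pipeline, the observation `z` (and, in the learned-uncertainty variants, the observation covariance `R`) would be produced by the convolutional network from pairs of consecutive images, and the filter recursion would be unrolled over the 100-step training subsequences and trained end-to-end by backpropagation.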