Consistent Right-Invariant Fixed-Lag Smoother with Application to Visual Inertial SLAM

Authors: Jianzhu Huai, Yukai Lin, Yuan Zhuang, Min Shi

AAAI 2021, pp. 6084-6092

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | By applying the proposed FLS to the monocular visual inertial simultaneous localization and mapping (SLAM) problem, we confirm that the method consistently estimates covariance similarly to a batch smoother in simulation and that it achieved accuracy comparable to traditional FLSs on real data.
Researcher Affiliation | Academia | Jianzhu Huai (1), Yukai Lin (2), Yuan Zhuang (1), Min Shi (3); (1) Wuhan University, (2) ETH Zurich, (3) Washington University in St. Louis
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using and citing open-source libraries such as GTSAM and Kimera (e.g., "Rosinol et al. 2020. Kimera: An open-source library...") but does not state that the authors' own implementation or code for the described methodology is publicly released or available.
Open Datasets | Yes | Furthermore, the practicality of the proposed right-invariant FLS is verified with the EuRoC benchmark (Burri et al. 2016). A scene with point landmarks distributed on four walls was simulated.
Dataset Splits | No | The paper describes simulation and real-world experiments but does not specify explicit training/validation/test splits (e.g., percentages or sample counts per split). It instead evaluates estimator performance over repeated simulation runs and benchmark sequences.
Hardware Specification | No | The paper discusses a "monocular camera-IMU platform" as part of the system being studied but does not specify the hardware (e.g., GPU or CPU model, memory) used to run the simulations or process the real data.
Software Dependencies | No | The paper mentions using "the Incremental Fixed Lag Smoother in GTSAM (Dellaert 2012)" and "iSAM2 (Kaess et al. 2012)" but does not provide specific version numbers for these software packages or other dependencies.
Experiment Setup | Yes | Simulation Setup: A scene with point landmarks distributed on four walls was simulated. A monocular camera-IMU platform traversed the scene for five minutes along a torus trajectory (Fig. 1), moving at an average velocity of 2.30 m/s. The camera captured images of size 752×480 at 10 Hz, and the image observations were corrupted by white Gaussian noise with a standard deviation of 1 pixel in each direction. The simulated inertial measurements were sampled at f = 100 Hz and corrupted by random-walk biases and additive white noise; discrete noise samples were drawn from the Gaussian distributions tabulated in Table 1. Estimator Setup: The proposed FLS was implemented with the Incremental Fixed Lag Smoother in GTSAM (Dellaert 2012), which wraps the iSAM2 (Kaess et al. 2012) method. By setting the time horizon to a large value, it reduces to iSAM2, which gives results very close to a batch solution (Forster et al. 2017). GTSAM also provides a Batch Fixed Lag Smoother wrapping a Levenberg-Marquardt solver, which ensures consistency by locking variables in the marginalization factor. We compared several estimators: the incremental FLS (Inc. FLS), the batch FLS, iSAM2, and the proposed FLS with the right-invariant error (RI-FLS). The first three estimators used the error state defined in (Forster et al. 2017). Except for iSAM2, the estimators adopted a time horizon of 1 second. All estimators were initialized with the true pose but a noisy velocity estimate affected by Gaussian noise N(0, 0.05²I₃ m²/s⁴). Each estimator was run 100 times, and only successful runs (with a position error below 100 m at the end) were used to compute the error metrics.
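
The estimator setup above centers on GTSAM's Incremental Fixed Lag Smoother with a 1 second time horizon. The following minimal C++ sketch illustrates how such a smoother can be instantiated and updated; it is not the authors' implementation (which is not released). The Pose3 prior, noise sigma, and key naming are illustrative placeholders, and the IncrementalFixedLagSmoother header path varies across GTSAM versions (recent releases ship it under gtsam/nonlinear/ rather than gtsam_unstable/nonlinear/).

// Minimal sketch (assumed setup, not the paper's code): a GTSAM
// IncrementalFixedLagSmoother with the 1 s time horizon from the estimator setup.
#include <gtsam/geometry/Pose3.h>
#include <gtsam/inference/Symbol.h>
#include <gtsam/nonlinear/ISAM2.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/slam/PriorFactor.h>
#include <gtsam_unstable/nonlinear/IncrementalFixedLagSmoother.h>

using namespace gtsam;

int main() {
  const double lag = 1.0;                 // 1 s fixed-lag horizon, as in the paper
  ISAM2Params isamParams;                 // default iSAM2 settings
  IncrementalFixedLagSmoother smoother(lag, isamParams);

  // New factors, initial values, and per-variable timestamps for one update.
  NonlinearFactorGraph newFactors;
  Values newValues;
  FixedLagSmoother::KeyTimestampMap newTimestamps;

  // Illustrative prior on the first pose (the paper initializes with the true
  // pose; the noise sigma here is a placeholder).
  const Key x0 = Symbol('x', 0);
  const Pose3 initialPose;                // identity as a stand-in
  auto priorNoise = noiseModel::Isotropic::Sigma(6, 1e-3);
  newFactors.add(PriorFactor<Pose3>(x0, initialPose, priorNoise));
  newValues.insert(x0, initialPose);
  newTimestamps[x0] = 0.0;                // timestamp in seconds

  // In a full visual-inertial pipeline, IMU preintegration and reprojection
  // factors for each keyframe would be appended before every update();
  // variables older than the lag are marginalized automatically.
  smoother.update(newFactors, newValues, newTimestamps);

  // Query the current estimate of a still-active variable.
  Pose3 estimate = smoother.calculateEstimate<Pose3>(x0);
  estimate.print("x0 estimate:\n");
  return 0;
}

Setting lag to a very large value reproduces the paper's observation that the incremental FLS degenerates to plain iSAM2, since no variables ever leave the smoothing window.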