Learning Integrated Holism-Landmark Representations for Long-Term Loop Closure Detection
Authors: Fei Han, Hua Wang, Hao Zhang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are performed to validate and evaluate the performance of our HALI approach on long-term loop closure detection, using two large-scale public datasets recorded in different long-term situations: CMU-VL (scenarios in different months) and Nordland (scenarios in different seasons). Both qualitative and quantitative evaluations are conducted to evaluate the performance of HALI. In addition, several baseline and recent methods are compared in each experiment, including BRIEF-GIST (Sünderhauf and Protzel 2011), Normalized Gradients (NormG) of grayscale images (used in SeqSLAM (Milford and Wyeth 2012)), and techniques based only upon color, LBP, HOG, SURF, and CNN features. To demonstrate that the performance improvement truly results from HALI, simple image-based matching is intentionally implemented for location matching. Throughout the experiments, the hyperparameter K = 2 was used, which resulted from our sensitivity analysis for hyperparameter value selection (presented at the end of this section). The projection matrix W is learned on patches containing landmarks or semantic objects from a separate held-back subset of the datasets during the training phase; the learned projection is then applied during the testing phase on separate, previously unseen testing instances for validation and evaluation. For quantitative evaluation and comparison, following previous studies (Sünderhauf et al. 2015; Zhang, Han, and Wang 2016), we use precision-recall curves as a metric, which show the tradeoff between precision and recall at different thresholds. |
| Researcher Affiliation | Academia | Fei Han, Hua Wang, Hao Zhang Department of Computer Science, Colorado School of Mines, Golden, CO 80401 fhan@mines.edu, huawangcs@gmail.com, hzhang@mines.edu |
| Pseudocode | Yes | Algorithm 1: Solve the general optimization problem in Eq. (5). Algorithm 2: Solve the proposed objective in Eq. (4). Algorithm 3: Solve the optimization problem in Eq. (4). |
| Open Source Code | No | The paper does not provide an explicit statement or a link to the source code for the described methodology. |
| Open Datasets | Yes | Extensive experiments are performed to validate and evaluate the performance of our HALI approach over long-term loop closure detection, using two large-scale public datasets recorded in different long-term situations, including: CMU-VL (scenarios in different months) and Nordland (scenarios in different seasons) datasets. The CMU Visual Localization (CMU-VL) dataset (Badino, Huber, and Kanade 2012) was recorded from two monocular cameras installed on a vehicle that traveled the same route five times around the Pittsburgh area across different months with a variety of weather, environmental, and climatological conditions. The large-scale Nordland dataset (Sünderhauf, Neubert, and Protzel 2013) was collected in four different seasons from a ten-hour long journey of a train. |
| Dataset Splits | No | The paper mentions 'The projection matrix W is learned on patches containing landmarks or semantic objects coming from a separate held-back subset of the datasets during the training phase; then the learned projection is applied during the testing phase on separate, previously unseen testing instances for validation and evaluation.' However, it does not specify exact percentages, sample counts, or detailed methodology for these splits. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions various feature extraction techniques and models (e.g., HOG, SIFT, ConvNet) but does not provide specific software names with version numbers (e.g., Python, PyTorch, TensorFlow versions) that would allow for reproducible setup. |
| Experiment Setup | Yes | Throughout the experiments, the hyperparameter K = 2 was used, which resulted from our sensitivity analysis for hyperparameter value selection (presented at the end of this section). |
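The report cites precision-recall curves as the quantitative metric for loop closure detection. As a minimal illustration of how such a curve is computed (this is not the authors' code; the function name, data layout, and single-best-match decision rule are assumptions for the example), the sketch below sweeps a score threshold over a query-to-map similarity matrix: a detection fires when the best-matching map image scores above the threshold, and it counts as a true positive only if it matches the ground-truth location.

```python
def precision_recall_curve(scores, ground_truth, thresholds):
    """Compute (threshold, precision, recall) points for loop closure detection.

    scores:       scores[i][j] is the similarity between query image i
                  and map image j (e.g. from matching HALI features).
    ground_truth: ground_truth[i] is the index of the true match for
                  query i, or None if query i has no loop closure.
    thresholds:   iterable of score thresholds to sweep.
    """
    curve = []
    for t in thresholds:
        tp = fp = fn = 0
        for i, row in enumerate(scores):
            best_j = max(range(len(row)), key=lambda j: row[j])
            if row[best_j] >= t:                 # detection fires
                if ground_truth[i] == best_j:
                    tp += 1                      # correct loop closure
                else:
                    fp += 1                      # wrong location reported
            elif ground_truth[i] is not None:
                fn += 1                          # true loop closure missed
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        curve.append((t, precision, recall))
    return curve
```

Raising the threshold trades recall for precision, which is exactly the tradeoff the curves in the paper visualize across methods.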