CTIN: Robust Contextual Transformer Network for Inertial Navigation
Authors: Bingbing Rao, Ehsan Kazemi, Yifan Ding, Devu M Shila, Frank M Tucker, Liqiang Wang
AAAI 2022, pp. 5413-5421
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments over a wide range of inertial datasets (e.g., RIDI, OxIOD, RoNIN, IDOL, and our own), CTIN is very robust and outperforms state-of-the-art models. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of Central Florida, Orlando, FL, USA; (2) Unknot.id Inc., Orlando, FL, USA; (3) U.S. Army CCDC SC, Orlando, FL, USA |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | As shown in Table 1, all selected datasets with rich motion contexts (e.g., handheld, pocket) are collected by multiple subjects using two devices: one is to collect IMU measurements and the other provides ground truth (i.e., position). [Table 1 lists RIDI, OxIOD, RoNIN, IDOL, and CTIN datasets, with citations to relevant papers for RIDI, RoNIN, and IDOL in the text and references, e.g., RIDI (Yan, Shan, and Furukawa 2018), RoNIN (Herath, Yan, and Furukawa 2020), IDOL (Sun, Melamed, and Kitani 2021)] |
| Dataset Splits | Yes | All datasets are split into training, validation, and testing datasets in a ratio of 8:1:1 (see the split sketch after the table). |
| Hardware Specification | Yes | To be consistent with the experimental settings of baselines, we conduct both training and testing on NVIDIA RTX 2080Ti GPU. |
| Software Dependencies | Yes | CTIN was implemented in Pytorch 1.7.1 (Paszke et al. 2019) and trained using Adam optimizer (Kingma and Ba 2014). |
| Experiment Setup | Yes | CTIN was implemented in Pytorch 1.7.1 (Paszke et al. 2019) and trained using the Adam optimizer (Kingma and Ba 2014). During training, early stopping with a patience of 30 (Prechelt 1998; Wang et al. 2020) is used to avoid overfitting, based on model performance on the validation dataset (see the training-loop sketch after the table). |
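
The 8:1:1 train/validation/test split reported above can be illustrated with a minimal sketch. The per-sequence granularity, the random seed, and the sequence naming below are assumptions made for illustration; the paper does not release a splitting script.

```python
# Hypothetical sketch of the 8:1:1 split described in the paper. The split
# granularity (per-sequence), the seed, and the sequence IDs are assumptions.
import random

def split_sequences(sequence_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly partition dataset sequences into train/val/test at an 8:1:1 ratio."""
    ids = list(sequence_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    return {
        "train": ids[:n_train],
        "val": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }

# Example: 50 RoNIN-style sequences -> 40 train, 5 val, 5 test.
splits = split_sequences([f"seq_{i:03d}" for i in range(50)])
print({k: len(v) for k, v in splits.items()})
```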
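
Likewise, the reported training configuration (PyTorch, Adam optimizer, early stopping with a patience of 30 on validation performance) can be sketched as below. The stand-in model, learning rate, loss, and synthetic data are placeholders, not the authors' implementation, since no code is released; only the optimizer choice and the patience value come from the paper.

```python
# Minimal sketch of the reported training setup: Adam optimizer and early
# stopping with patience 30 on validation performance. Model, lr, loss, and
# data below are assumptions for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 2))  # stand-in for CTIN
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)             # lr is an assumption
criterion = nn.MSELoss()

best_val, patience, wait = float("inf"), 30, 0
for epoch in range(1000):
    # Training step on synthetic IMU-like batches (placeholder data).
    model.train()
    x, y = torch.randn(32, 6), torch.randn(32, 2)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # Early stopping on validation loss, patience = 30 epochs.
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(torch.randn(32, 6)), torch.randn(32, 2)).item()
    if val_loss < best_val:
        best_val, wait = val_loss, 0
        torch.save(model.state_dict(), "best_model.pt")  # keep the best checkpoint
    else:
        wait += 1
        if wait >= patience:
            break  # stop once validation has not improved for 30 epochs
```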