IONet: Learning to Cure the Curse of Drift in Inertial Odometry

Authors: Changhao Chen, Xiaoxuan Lu, Andrew Markham, Niki Trigoni

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We collected a large dataset for training and testing, and conducted extensive experiments across different attachments, users/devices and new environments, whose results outperform traditional SINS and PDR mechanisms.
Researcher Affiliation | Academia | Changhao Chen, Xiaoxuan Lu, Andrew Markham, Niki Trigoni, Department of Computer Science, University of Oxford, United Kingdom. Email: {firstname.lastname}@cs.ox.ac.uk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It includes mathematical equations and block diagrams.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. There is no explicit code release statement, repository link, or mention of code in supplementary materials.
Open Datasets | No | There are no public datasets for indoor localization using phone-based IMU. We collected data ourselves with a pedestrian walking inside a room equipped with an optical motion capture system (Vicon) (Vicon 2017), providing a highly precise full-pose reference (0.01 m for location, 0.1 degree for orientation) for our experimental device.
Dataset Splits | No | The paper describes data collection for training and testing (new users, devices, environments) but does not provide specific numerical dataset split information (e.g., percentages or exact sample counts) for train, validation, and test sets from a single dataset. It mentions 'validation results' in Figure 8 but does not detail the split percentage.
Hardware Specification | Yes | We implemented our model on the publicly available TensorFlow framework and ran the training process on an NVIDIA TITAN X GPU.
Software Dependencies | No | The paper mentions the 'TensorFlow framework' but does not provide a specific version number for TensorFlow or any other software dependencies needed to replicate the experiment.
Experiment Setup | Yes | A window length of 200 frames (2 s) is used, and each LSTM layer has 96 hidden nodes. IMU measurements are divided into independent windows with a stride of 10 frames (0.1 s). During training, we used Adam, a first-order gradient-based optimizer (Kingma and Ba 2015), with a learning rate of 0.0015. To prevent our neural networks from overfitting, we gathered data with abundant moving characteristics and adopted Dropout (Srivastava et al. 2014) in each LSTM layer, randomly dropping 25% of units from the network during training.
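
The Experiment Setup row fixes most of the training hyper-parameters (window length, stride, hidden size, optimizer, learning rate, dropout rate). The sketch below shows one plausible way to wire those settings together in TensorFlow/Keras; it is not the authors' released code. The two stacked LSTM layers, the 6-channel IMU input (3-axis gyroscope + 3-axis accelerometer), the 2-D displacement output per window, the per-frame labelling scheme, and the batch size are assumptions made for illustration and are not stated in the excerpt above.

```python
# Hypothetical sketch of the training setup described in the Experiment Setup row.
# Only the window length, stride, 96 hidden nodes per layer, Adam with lr 0.0015,
# and 25% dropout per LSTM layer come from the excerpt; everything else is assumed.
import numpy as np
import tensorflow as tf

WINDOW = 200    # 200 frames = 2 s at the implied 100 Hz IMU rate
STRIDE = 10     # 10 frames = 0.1 s between consecutive windows
HIDDEN = 96     # hidden nodes per LSTM layer
DROPOUT = 0.25  # 25% of units dropped after each LSTM layer during training


def make_windows(imu, targets):
    """Slice a continuous IMU stream of shape (T, 6) into overlapping windows.

    `targets` holds one label per frame; this sketch takes the label at the
    last frame of each window as the per-window ground truth (an assumption).
    """
    xs, ys = [], []
    for start in range(0, len(imu) - WINDOW + 1, STRIDE):
        xs.append(imu[start:start + WINDOW])
        ys.append(targets[start + WINDOW - 1])
    return np.stack(xs), np.stack(ys)


def build_model():
    # Two stacked LSTM layers with 96 units each, Dropout after each layer,
    # and a 2-D regression head (e.g. a polar displacement per window).
    return tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, 6)),
        tf.keras.layers.LSTM(HIDDEN, return_sequences=True),
        tf.keras.layers.Dropout(DROPOUT),
        tf.keras.layers.LSTM(HIDDEN),
        tf.keras.layers.Dropout(DROPOUT),
        tf.keras.layers.Dense(2),
    ])


if __name__ == "__main__":
    # Toy stand-in data; real training would use the collected IMU sequences
    # with Vicon-derived displacement labels.
    imu_stream = np.random.randn(5000, 6).astype("float32")
    frame_targets = np.random.randn(5000, 2).astype("float32")
    x, y = make_windows(imu_stream, frame_targets)

    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0015),
                  loss="mse")
    model.fit(x, y, batch_size=64, epochs=1)  # batch size is an assumption
```

The explicit Dropout layers here are only one way to approximate "Dropout in each LSTM layer"; the exact placement of dropout in the original model, and the loss used, are not specified in the excerpt.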