IDOL: Inertial Deep Orientation-Estimation and Localization

Authors: Scott Sun, Dennis Melamed, Kris Kitani (pp. 6128-6137)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our proposed method outperforms state-of-the-art methods in both orientation and position error on a large dataset we constructed that contains 20 hours of pedestrian motion across 3 buildings and 15 subjects.
Researcher Affiliation | Academia | Scott Sun, Dennis Melamed, Kris Kitani (Carnegie Mellon University)
Pseudocode | No | The paper does not include a section or figure explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Code and data are available at https://github.com/KlabCMU/IDOL.
Open Datasets | Yes | Code and data are available at https://github.com/KlabCMU/IDOL. We collected 20 hours of human motion in 3 different buildings of varying shapes and sizes.
Dataset Splits | No | The paper states that each building is separately trained and tested and mentions 'Known subjects (2.4 hr) present in train split; unknown (2.2 hr) were not', but it does not specify explicit percentages or counts for the training, validation, and test splits of the overall dataset.
Hardware Specification | Yes | We implemented our model in Pytorch 1.5 (Paszke et al. 2019) and train it using the Adam optimizer (Kingma and Ba 2015) on an Nvidia RTX 2080Ti GPU. ... Using only an AMD Ryzen Threadripper 1920x CPU, the forward inference time is approx. 65 ms for 1 s of data (100 samples).
Software Dependencies | Yes | We implemented our model in Pytorch 1.5 (Paszke et al. 2019).
Experiment Setup | Yes | The orientation network is first individually trained using a fixed seed and a learning rate of 0.0005. Then, using these initialized weights, the position network is attached and then trained using a learning rate of 0.001. We use a batch size of 64, with the network reaching convergence within 20 epochs.
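The two-stage schedule described in the experiment-setup row (pretrain the orientation network at lr 0.0005 with a fixed seed, then attach the position network and train at lr 0.001, batch size 64) can be sketched in PyTorch. The tiny linear modules, synthetic data, and MSE losses below are placeholders for illustration only; the paper's actual architectures and loss functions differ.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the paper's networks (hypothetical, not IDOL's real models).
class OrientationNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(6, 4)  # 6-axis IMU sample -> quaternion
    def forward(self, x):
        q = self.fc(x)
        return q / q.norm(dim=-1, keepdim=True)  # unit-normalize the quaternion

class PositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4 + 6, 2)  # orientation + IMU -> 2-D displacement
    def forward(self, q, x):
        return self.fc(torch.cat([q, x], dim=-1))

torch.manual_seed(0)  # "fixed seed", per the paper

# One synthetic batch of 64 samples (the paper's batch size).
imu = torch.randn(64, 6)
q_gt = torch.randn(64, 4)
q_gt = q_gt / q_gt.norm(dim=-1, keepdim=True)
p_gt = torch.randn(64, 2)

# Stage 1: train the orientation network alone (lr = 0.0005).
orient = OrientationNet()
opt1 = torch.optim.Adam(orient.parameters(), lr=5e-4)
for _ in range(20):  # "convergence within 20 epochs"
    opt1.zero_grad()
    loss = nn.functional.mse_loss(orient(imu), q_gt)
    loss.backward()
    opt1.step()

# Stage 2: attach the position network to the initialized orientation
# weights and continue training (lr = 0.001).
pos = PositionNet()
opt2 = torch.optim.Adam(list(orient.parameters()) + list(pos.parameters()), lr=1e-3)
for _ in range(20):
    opt2.zero_grad()
    loss = nn.functional.mse_loss(pos(orient(imu), imu), p_gt)
    loss.backward()
    opt2.step()
```

Whether stage 2 updates the orientation weights jointly or freezes them is an assumption here; the paper only states that the position network is attached to the initialized orientation weights.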