Unsupervised Monocular Visual-inertial Odometry Network

Authors: Peng Wei, Guoliang Hua, Weibo Huang, Fanyang Meng, Hong Liu

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments have been conducted on KITTI and Malaga datasets to demonstrate the superiority of UnVIO over other state-of-the-art VO/VIO methods.
Researcher Affiliation | Academia | ¹Key Laboratory of Machine Perception, Peking University, Shenzhen Graduate School, China; ²Peng Cheng Laboratory, Shenzhen, China. {weapon, glhua, weibohuang, hongliu}@pku.edu.cn, mengfy@pcl.ac.cn
Pseudocode | No | The paper describes its methods in text and with diagrams (e.g., Figure 1), but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | The codes are open-source.¹ ¹https://github.com/Ironbrotherstyle/UnVIO
Open Datasets | Yes | KITTI Dataset. KITTI dataset [Geiger et al., 2012] serves as a prevalent driving dataset... Malaga Dataset. Malaga [Blanco-Claraco et al., 2014] is an outdoor dataset.
Dataset Splits | No | The paper specifies 'Seqs 00-08 excluding 03 are adopted for training and 09-10 are utilized for testing' for KITTI, and similar splits for Malaga. It does not explicitly mention a distinct validation set for model tuning. (A split sketch follows the table.)
Hardware Specification | Yes | All the models are implemented by using the PyTorch framework on a computer equipped with an NVIDIA GeForce GTX 1080 Ti GPU.
Software Dependencies | No | The paper states 'All the models are implemented by using the PyTorch framework' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | Adam optimizer with learning rate 10⁻⁴, β₁ = 0.9, β₂ = 0.999 is utilized. Images for training on both datasets are resized to 832 × 256; meanwhile, the number of IMU samples n is set to 11. The training process converges after about 100,000 iterations with a batch size of 4. Besides, the length of training sequence s and window size w are 5 and 3 respectively in our experiment. The weights for loss functions are empirically given as: α₁ = 1, α₂ = 0.1, α₃ = 0.1, α₄ = 0.1, λ₁ = 0.15, λ₂ = 0.85. (A training-configuration sketch follows the table.)
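
The KITTI split quoted in the Dataset Splits row can be written down directly. The following is a minimal sketch of that split, assuming the standard KITTI odometry sequence naming ("00"–"10"); the helper function and its name are illustrative, not taken from the UnVIO repository.

```python
# Sketch of the KITTI odometry split reported in the paper:
# sequences 00-08 (excluding 03) for training, 09-10 for testing.
# Sequence naming and the helper below are assumptions for illustration.
KITTI_TRAIN_SEQS = ["00", "01", "02", "04", "05", "06", "07", "08"]
KITTI_TEST_SEQS = ["09", "10"]

def split_sequences(all_seqs):
    """Partition a list of KITTI odometry sequence IDs into train/test lists."""
    train = [s for s in all_seqs if s in KITTI_TRAIN_SEQS]
    test = [s for s in all_seqs if s in KITTI_TEST_SEQS]
    return train, test

# Example: all eleven odometry sequences; 03 falls into neither split.
train, test = split_sequences([f"{i:02d}" for i in range(11)])
```

Note that no validation split is defined here, which mirrors the paper's omission of a distinct validation set.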
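The Experiment Setup row lists enough hyperparameters to reconstruct the optimizer and loss-weight configuration. Below is a hedged PyTorch sketch under those reported values: the optimizer call and numeric settings come from the paper, while the placeholder network, the variable names, and the pairing of weights α₁..α₄ to specific loss terms are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper.
IMG_W, IMG_H = 832, 256          # training image resolution
N_IMU = 11                       # IMU samples n
SEQ_LEN, WINDOW_SIZE = 5, 3      # training sequence length s, window size w
BATCH_SIZE = 4
NUM_ITERATIONS = 100_000         # approximate iterations to convergence

# Empirical loss weights from the paper.
ALPHAS = (1.0, 0.1, 0.1, 0.1)    # α1, α2, α3, α4
LAMBDA1, LAMBDA2 = 0.15, 0.85    # λ1, λ2

# Placeholder module standing in for the UnVIO depth/pose/IMU networks,
# which are not reproduced here.
model = nn.Linear(8, 8)

# Adam with the reported learning rate and betas.
optimizer = torch.optim.Adam(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999)
)

def total_loss(loss_terms):
    """Weighted sum of four loss terms, assuming `loss_terms` holds the
    paper's losses in the order matching α1..α4 (an illustrative assumption)."""
    return sum(a * l for a, l in zip(ALPHAS, loss_terms))
```

This sketch only captures the configuration surface; reproducing UnVIO itself would require the networks and loss definitions from the open-source repository linked above.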