Reliable Inlier Evaluation for Unsupervised Point Cloud Registration

Authors: Yaqi Shen, Le Hui, Haobo Jiang, Jin Xie, Jian Yang (pp. 2198-2206)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental The experimental results on extensive datasets demonstrate that our unsupervised point cloud registration method can yield comparable performance. We evaluate our method on extensive benchmark datasets, including ModelNet40 (Wu et al. 2015), 7Scenes (Shotton et al. 2013), ICL-NUIM (Choi, Zhou, and Koltun 2015), and KITTI (Geiger, Lenz, and Urtasun 2012), and the experimental results verify the effectiveness of our method. We conduct an ablation study on two key components of our model: the Matching Map Refinement (MMR) and Inlier Evaluation (IE) modules.
Researcher Affiliation Academia PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education Jiangsu Key Lab of Image and Video Understanding for Social Security School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China {syq, le.hui, jiang.hao.bo, csjxie, csjyang}@njust.edu.cn
Pseudocode No No pseudocode or algorithm blocks found.
Open Source Code No No explicit statement about providing open-source code or a link to a repository for the described methodology.
Open Datasets Yes We evaluate our method on ModelNet40 (Wu et al. 2015), 7Scenes (Shotton et al. 2013), ICL-NUIM (Choi, Zhou, and Koltun 2015) and the KITTI odometry dataset (Geiger, Lenz, and Urtasun 2012).
Dataset Splits Yes The KITTI odometry dataset consists of 11 sequences with ground-truth pose; we use sequences 00-05 for training, 06-07 for validation, and 08-10 for testing.
Hardware Specification Yes We calculate the inference time with an Intel i5-8400 CPU and a GeForce RTX 2080Ti GPU.
Software Dependencies No Our model is implemented in PyTorch. No specific version numbers for PyTorch or other software dependencies are provided.
Experiment Setup Yes We optimize the parameters with the Adam optimizer. The initial learning rate is 0.001. For ModelNet40 and KITTI, we train the network for 50 epochs and multiply the learning rate by 0.7 at epoch 25. For indoor scenes, we multiply the learning rate by 0.7 at epochs 25, 50, 75 and train the network for 100 epochs.
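The KITTI split quoted above (sequences 00-05 / 06-07 / 08-10) can be expressed directly as configuration. This is a minimal sketch; the sequence-ID formatting assumes the dataset's usual zero-padded folder names, and the list names are ours, not the paper's.

```python
# KITTI odometry split as described in the paper:
# train on 00-05, validate on 06-07, test on 08-10.
TRAIN_SEQS = [f"{i:02d}" for i in range(0, 6)]    # "00".."05"
VAL_SEQS   = [f"{i:02d}" for i in range(6, 8)]    # "06".."07"
TEST_SEQS  = [f"{i:02d}" for i in range(8, 11)]   # "08".."10"

# Sanity check: the three splits cover all 11 ground-truth sequences once.
assert TRAIN_SEQS + VAL_SEQS + TEST_SEQS == [f"{i:02d}" for i in range(11)]
```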
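The training schedule in the Experiment Setup row (Adam, initial learning rate 0.001, learning rate multiplied by 0.7 at epoch 25, 50 epochs for ModelNet40/KITTI) maps onto a standard PyTorch `MultiStepLR` setup. This is a minimal sketch under that assumption; the model below is a placeholder, not the paper's registration network, and the indoor-scene variant would use `milestones=[25, 50, 75]` over 100 epochs.

```python
import torch

# Placeholder model standing in for the paper's registration network.
model = torch.nn.Linear(3, 3)

# Adam with the paper's initial learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# ModelNet40 / KITTI schedule: lr *= 0.7 at epoch 25, trained for 50 epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[25], gamma=0.7
)

for epoch in range(50):
    # ... forward pass, loss, and backward pass over batches would go here ...
    optimizer.step()   # no-op here (no gradients), keeps the step order valid
    scheduler.step()   # advances the lr schedule once per epoch
```

After epoch 25 the learning rate drops to 0.001 * 0.7 = 0.0007 and stays there for the remaining epochs.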