Multi-scale Consistency for Robust 3D Registration via Hierarchical Sinkhorn Tree

Authors: Chengwei Ren, Yifan Feng, Weixiang Zhang, Xiao-Ping (Steven) Zhang, Yue Gao

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate HST consistently outperforms the state-of-the-art methods on both indoor and outdoor benchmarks."
Researcher Affiliation | Academia | "{¹Shenzhen Ubiquitous Data Enabling Key Lab, ²Shenzhen International Graduate School, ³BNRist, THUIBCS, School of Software}, Tsinghua University"
Pseudocode | No | The paper describes the algorithm steps in text and figures (e.g., Figure 2 for the overview, Section 3.2 for the Hierarchical Sinkhorn Tree), but it does not include an explicitly labeled 'Pseudocode' or 'Algorithm' block. (A generic Sinkhorn sketch follows the table.)
Open Source Code | Yes | Checklist question: "Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?" Answer: "[Yes] Justification: Please see the supplemental material Sec. A.4."
Open Datasets | Yes | "3DMatch [1] contains 62 scenes collected from SUN3D [2], 7-Scenes [3], RGBD Scenes v.2 [4], Analysis-by-Synthesis [5], Bundle Fusion [6], and Halber et al. [7], among which 46 scenes are used for training, 8 scenes for validation and 8 scenes for testing." For KITTI [11], the paper adopts sequences 0-5 for training, 6-7 for validation and 8-10 for testing.
Dataset Splits | Yes | 3DMatch [1]: 46 scenes for training, 8 for validation, 8 for testing. KITTI [11]: sequences 0-5 for training, 6-7 for validation, 8-10 for testing. (A split-configuration sketch follows the table.)
Hardware Specification | Yes | "Our proposed method is implemented and evaluated in Pytorch [15] and we train it on a single RTX 3090 GPU with an AMD EPYC 9654 CPU."
Software Dependencies | No | PyTorch [15] is the only dependency named, and no version number is provided for it or for any other software.
Experiment Setup | Yes | "The network is trained with Adam optimizer [16] for 40 epochs on 3DMatch and 80 epochs on KITTI with batch size of 1 and weight decay of 10⁻⁶. The learning rate initializes from 10⁻⁴ and decays exponentially by 0.05 every epoch on 3DMatch and every 4 epochs on KITTI, respectively." (A hedged optimizer/scheduler sketch follows the table.)
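
The Sinkhorn component referenced in the Pseudocode row builds on the standard Sinkhorn-Knopp iteration, which normalizes a pairwise score matrix toward a doubly-stochastic soft-assignment matrix. Below is a minimal, generic log-domain sketch of that base iteration in PyTorch; it illustrates only the well-known algorithm, not the paper's hierarchical tree variant, and the function name and default iteration count are illustrative.

```python
import torch

def sinkhorn(log_scores: torch.Tensor, n_iters: int = 10) -> torch.Tensor:
    """Generic log-domain Sinkhorn-Knopp normalization of a score matrix.

    Alternately normalizes rows and columns so that exp(log_scores)
    approaches a doubly-stochastic matching matrix. This is the standard
    iteration only, not the paper's Hierarchical Sinkhorn Tree.
    """
    for _ in range(n_iters):
        # Row normalization in log space.
        log_scores = log_scores - torch.logsumexp(log_scores, dim=-1, keepdim=True)
        # Column normalization in log space.
        log_scores = log_scores - torch.logsumexp(log_scores, dim=-2, keepdim=True)
    return log_scores.exp()
```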
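For the Dataset Splits row, a minimal configuration sketch of the quoted splits. The `SPLITS` name is hypothetical; the 3DMatch entries record only scene counts (the paper's per-scene lists are not reproduced here), while the KITTI entries are drive-sequence indices taken from the quote above.

```python
# Hypothetical split configuration mirroring the splits quoted above.
SPLITS = {
    "3dmatch": {"train": 46, "val": 8, "test": 8},  # number of scenes
    "kitti": {
        "train": list(range(0, 6)),   # sequences 00-05
        "val":   list(range(6, 8)),   # sequences 06-07
        "test":  list(range(8, 11)),  # sequences 08-10
    },
}
```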
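The Experiment Setup row translates directly into a PyTorch optimizer/scheduler configuration. The sketch below is a hedged reconstruction: `build_optimizer` is a hypothetical helper, and it assumes "decays exponentially by 0.05" means the learning rate is multiplied by 0.95 at each decay step (the paper could equally mean a factor of exp(-0.05) ≈ 0.951).

```python
import torch

def build_optimizer(model: torch.nn.Module, dataset: str = "3dmatch"):
    """Hedged reconstruction of the quoted training hyper-parameters.

    Quoted values: Adam, lr 1e-4, weight decay 1e-6, batch size 1,
    40 epochs on 3DMatch / 80 epochs on KITTI. The decay factor 0.95
    is an assumed reading of 'decays exponentially by 0.05'.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-6)
    # Decay every epoch on 3DMatch, every 4 epochs on KITTI.
    step_size = 1 if dataset == "3dmatch" else 4
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=0.95)
    return optimizer, scheduler
```

`StepLR` with `step_size=1` reproduces per-epoch exponential decay for 3DMatch, while `step_size=4` matches the every-4-epochs schedule quoted for KITTI.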