CoFiNet: Reliable Coarse-to-fine Correspondences for Robust Point Cloud Registration
Authors: Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, Slobodan Ilic
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluation of CoFiNet on both indoor and outdoor standard benchmarks shows our superiority over existing methods. |
| Researcher Affiliation | Collaboration | 1 Technical University of Munich 2 National University of Defense Technology 3 Siemens AG, München |
| Pseudocode | No | The paper describes methods through text and diagrams (Figure 1, Figure 2) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | [Code] |
| Open Datasets | Yes | We evaluate our model on three challenging public benchmarks, including both indoor and outdoor scenarios. Following [12], for indoor scenes, we evaluate our model on both 3DMatch [6], where point cloud pairs share >30% overlap, and 3DLoMatch [12], where point cloud pairs have 10%–30% overlap. In line with existing works [10, 12], we evaluate for outdoor scenes on odometry KITTI [20]. |
| Dataset Splits | No | The paper mentions using standard benchmarks (3DMatch, 3DLoMatch, KITTI) but does not explicitly provide details about the training, validation, or test dataset splits used for these benchmarks within the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions various algorithms and models, but does not provide specific software dependencies or library version numbers (e.g., Python, PyTorch, or CUDA versions) used for the implementation. |
| Experiment Setup | Yes | Our total loss L = Lc + λLf is calculated as the weighted sum of the coarse-scale Lc and the fine-scale Lf, where λ is used to balance the terms. To guarantee a higher recall, we adopt a threshold τc and keep likely correspondences whose confidence scores are above τc. We define the obtained coarse node correspondence set as C = {(PX(i), PY(j))}, with \|C\| = c, where \|·\| denotes the set cardinality. Furthermore, we set the other threshold τm to guarantee that c ≥ τm. ... training for 20 epochs compared to the best performing model [12], which uses over 20M parameters and is trained for 150 epochs. |
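The setup quoted above combines a weighted loss with a two-threshold correspondence selection: keep node pairs whose confidence exceeds τc, while τm guarantees a minimum number c of coarse correspondences. A minimal NumPy sketch of that selection logic, with all function names, the fallback strategy (top-τm scores when too few pairs pass τc), and the default values being illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def select_coarse_correspondences(conf, tau_c=0.05, tau_m=3):
    """Keep node pairs (i, j) with confidence above tau_c.
    If fewer than tau_m survive, fall back to the tau_m
    highest-scoring pairs so that |C| = c >= tau_m holds.
    `conf` is an (n_x, n_y) matrix of matching confidences."""
    i, j = np.nonzero(conf > tau_c)
    if len(i) < tau_m:
        # Fallback: take the tau_m largest entries of the whole matrix.
        flat = np.argsort(conf, axis=None)[::-1][:tau_m]
        i, j = np.unravel_index(flat, conf.shape)
    return list(zip(i.tolist(), j.tolist()))

def total_loss(loss_coarse, loss_fine, lam=1.0):
    """L = Lc + lambda * Lf: weighted sum of coarse- and fine-scale terms."""
    return loss_coarse + lam * loss_fine
```

For example, with a 2x2 confidence matrix where only two entries exceed τc but τm = 3, the fallback pads the set with the next-best pair, so the guarantee c ≥ τm is met by construction.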