APR: Online Distant Point Cloud Registration through Aggregated Point Cloud Reconstruction

Authors: Quan Liu, Yunsong Zhou, Hongzi Zhu, Shan Chang, Minyi Guo

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments against state-of-the-art (SOTA) feature extractors on the KITTI and nuScenes datasets. Results show that APR outperforms all other extractors by a large margin, increasing the average registration recall of SOTA extractors by 7.1% on LoKITTI and 4.6% on LoNuScenes." (See the registration-recall sketch after the table.)
Researcher Affiliation | Academia | 1 Shanghai Jiao Tong University, 2 Donghua University; {liuquan2017,zhouyunsong,hongzi,guo-my}@sjtu.edu.cn, changshan@dhu.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Code is available at https://github.com/liuQuan98/APR."
Open Datasets | Yes | "We conduct extensive experiments against state-of-the-art (SOTA) feature extractors on the KITTI and nuScenes datasets." "Previously, only close-range registration datasets have been extracted from KITTI [Geiger et al., 2012] and nuScenes [Caesar et al., 2020]." "For alignment of non-key frames in APG, we use only the ground-truth pose from SemanticKITTI [Behley et al., 2019] and nuScenes [Caesar et al., 2020]."
Dataset Splits | Yes | "We distill two low-overlap point cloud datasets, i.e., LoKITTI and LoNuScenes, with below 30% overlap, from KITTI and nuScenes, and conduct extensive experiments." "Table 2 lists the performance of FCGF+APR(a/s) and Predator+APR(a) on the KITTI [5, 20] validation set." "As a result, we first pre-train a model on a dataset with a lower distance range, d1 ∈ [5, 20]. The pre-trained model is then further fine-tuned on [5, d2] (d2 ≥ 30) to guarantee convergence." (See the pair-selection sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "We set default parameters as ψ = 3 and α = 10." "Specifically, the asymmetrical decoder is an MLP with flexible hidden-layer sizes, e.g., (2^9, 2^8) denotes a 3-layer MLP with l, 512, 256, and ϕ × 3 dimensions from input to output." "The decoder with the size of (2^9, 2^8) achieves the best RR performance." "As a result, we first pre-train a model on a dataset with a lower distance range, d1 ∈ [5, 20]. The pre-trained model is then further fine-tuned on [5, d2] (d2 ≥ 30) to guarantee convergence." (See the decoder sketch after the table.)
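
Registration recall (RR), quoted in the Research Type row, is the headline metric. For reference, below is a minimal sketch of how RR is conventionally computed on KITTI-style benchmarks; the 2 m translation and 5 degree rotation success thresholds are a common convention assumed here, not values quoted from the paper.

```python
import numpy as np

def registration_recall(T_est, T_gt, rte_thresh=2.0, rre_thresh=5.0):
    """Fraction of pairs whose estimated 4x4 pose lies within the translation
    (meters) and rotation (degrees) thresholds of the ground truth.
    The 2 m / 5 deg defaults are an assumed convention."""
    success = 0
    for est, gt in zip(T_est, T_gt):
        # Relative translation error (RTE): distance between translation vectors.
        rte = np.linalg.norm(est[:3, 3] - gt[:3, 3])
        # Relative rotation error (RRE): angle of the residual rotation.
        cos_angle = (np.trace(est[:3, :3].T @ gt[:3, :3]) - 1.0) / 2.0
        rre = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
        success += int(rte <= rte_thresh and rre <= rre_thresh)
    return success / len(T_gt)
```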
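The Dataset Splits row describes a two-stage distance curriculum: pre-train on pairs whose registration distance lies in [5, 20] m, then fine-tune on [5, d2] m. Below is a minimal sketch of selecting distance-bounded pairs from ground-truth poses; the function name and the pairing rule (inter-frame translation distance) are illustrative assumptions, not the authors' released preprocessing.

```python
import numpy as np

def select_pairs_by_distance(poses, d_min=5.0, d_max=20.0):
    """Return index pairs (i, j) whose sensor centers lie d_min..d_max meters
    apart, given (N, 4, 4) ground-truth poses. Pre-training would use
    d_max=20; fine-tuning would raise d_max toward the target range."""
    centers = poses[:, :3, 3]
    pairs = []
    for i in range(len(centers) - 1):
        # Distances from frame i to every later frame.
        dists = np.linalg.norm(centers[i + 1:] - centers[i], axis=1)
        for j in np.nonzero((dists >= d_min) & (dists <= d_max))[0]:
            pairs.append((i, i + 1 + int(j)))
    return pairs
```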
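The Experiment Setup row pins down the asymmetrical decoder shape: hidden sizes (2^9, 2^8) give an l -> 512 -> 256 -> ϕ×3 mapping. A minimal PyTorch sketch under that reading follows; the class name, ReLU activations, and the reshaping of the output into ϕ 3D points are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class AsymmetricDecoder(nn.Module):
    """3-layer MLP mapping l-dim features to phi*3 outputs:
    l -> 512 -> 256 -> phi*3 for hidden=(2**9, 2**8)."""

    def __init__(self, in_dim: int, phi: int, hidden=(2**9, 2**8)):
        super().__init__()
        self.phi = phi
        dims = [in_dim, *hidden]
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU(inplace=True)]
        layers.append(nn.Linear(dims[-1], phi * 3))  # final projection to phi*3
        self.mlp = nn.Sequential(*layers)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # (N, l) per-point features -> (N, phi, 3) reconstructed coordinates.
        return self.mlp(feats).view(feats.shape[0], self.phi, 3)
```

For example, AsymmetricDecoder(in_dim=32, phi=8) maps an (N, 32) feature batch to (N, 8, 3) reconstructed points.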