PointMC: Multi-instance Point Cloud Registration based on Maximal Cliques

Authors: Yue Wu, Xidao Hu, Yongzhe Yuan, Xiaolong Fan, Maoguo Gong, Hao Li, Mingyang Zhang, Qiguang Miao, Wenping Ma

ICML 2024

Reproducibility assessment. Each entry lists the variable, the result, and the supporting LLM response:
Research Type: Experimental. "We conduct comprehensive experiments on both synthetic and real-world datasets, and the results show the proposed PointMC yields remarkable performance improvements."
Researcher Affiliation: Academia. "1 MoE Key Lab of Collaborative Intelligence Systems, Xidian University, Xi'an, China; 2 School of Computer Science and Technology, Xidian University, Xi'an, China; 3 School of Electronic Engineering, Xidian University, Xi'an, China; 4 School of Artificial Intelligence, Xidian University, Xi'an, China."
Pseudocode: No. The paper describes its pipeline and various modules (e.g., "Figure 2. The pipeline of the proposed PointMC"), but it does not include any specific pseudocode or algorithm blocks.
Open Source Code: No. The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets: Yes. "We employ Scan2CAD (Avetisyan et al., 2019) as the real-world dataset, which aligns object instances in ScanNet (Dai et al., 2017) with CAD models in ShapeNet (Chang et al., 2015). To evaluate synthetic objects, we use ModelNet40 (Wu et al., 2015), which contains 12,311 CAD models belonging to 40 categories."
Dataset Splits: Yes. "After obtaining 2,175 sets of point clouds, we used 1,523 scenes for training, 326 scenes for validation, and 326 scenes for testing."
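As a quick sanity check on the reported split, the scene counts sum to the stated total and correspond to roughly a 70/15/15 split (the variable names below are illustrative, not from the paper):

```python
# Reported split sizes from the paper: 1,523 / 326 / 326 out of 2,175 scenes.
total = 2175
train, val, test = 1523, 326, 326

# The three subsets account for every scene.
assert train + val + test == total

# Fractions: roughly a 70/15/15 train/val/test split.
fractions = (round(train / total, 2), round(val / total, 2), round(test / total, 2))
print(fractions)  # (0.7, 0.15, 0.15)
```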
Hardware Specification: No. The paper mentions "Our network is trained using PyTorch", but it does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies: No. The paper states "Our network is trained using PyTorch", but it does not provide specific version numbers for PyTorch or any other software libraries or dependencies.
Experiment Setup: Yes. "We optimize the network using the Adam optimizer with a weight decay of 0.001 and a learning rate of 0.01. Our network is trained using PyTorch, and we train the network for 1000 epochs. All point clouds were downsampled with a 0.05 m voxel size. The distance parameter σ_d is set to 0.05 for the synthetic dataset and 0.1 for the real-world dataset. The distance parameter d_thr is set to 10 pr, where pr is a distance unit called point cloud resolution (Yang et al., 2019). The default value for the compatibility threshold t_c is 0.99. We select the correspondences whose confidence scores are above τ = 0.6 as inliers; the others are removed as outliers."
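Since the paper releases no code, the two preprocessing steps quoted above can be sketched in plain Python. This is a minimal sketch, assuming floor-quantization voxel downsampling (one representative point per 0.05 m voxel) and a hard confidence threshold τ = 0.6 for inlier selection; the function names and data layout are illustrative, not the authors' implementation:

```python
def voxel_downsample(points, voxel_size=0.05):
    """Keep the first point that falls into each voxel cell.

    Each point is mapped to an integer voxel index by floor division of its
    coordinates by the voxel size (0.05 m per the paper's setup).
    """
    cells = {}
    for p in points:
        key = tuple(int(c // voxel_size) for c in p)
        cells.setdefault(key, p)  # first point in a cell is kept
    return list(cells.values())


def select_inliers(correspondences, scores, tau=0.6):
    """Keep correspondences whose confidence score exceeds tau (0.6 here)."""
    return [c for c, s in zip(correspondences, scores) if s > tau]


# Usage: the first two points share a voxel at 0.05 m resolution,
# so only two points survive downsampling.
pts = [(0.00, 0.00, 0.00), (0.01, 0.02, 0.00), (0.30, 0.00, 0.00)]
print(len(voxel_downsample(pts)))  # 2

# Correspondences with score > 0.6 are kept as inliers.
print(select_inliers(["a", "b", "c"], [0.9, 0.5, 0.7]))  # ['a', 'c']
```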