DeepSPF: Spherical SO(3)-Equivariant Patches for Scan-to-CAD Estimation

Authors: Driton Salihu, Adam Misik, Yuankai Wu, Constantin Patsch, Fabian Seguel, Eckehard Steinbach

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through rigorous evaluations, we demonstrate significant enhancements in Scan-to-CAD performance for point cloud registration, retrieval, and completion: a significant reduction in the rotation error of existing registration methods, an improvement of up to 17% in the Top-1 error for retrieval tasks, and a notable reduction of up to 30% in the Chamfer Distance for completion models, all attributable to the incorporation of DeepSPF.
Researcher Affiliation | Collaboration | Chair of Media Technology, Munich Institute of Robotics and Machine Intelligence, and Department of Computer Engineering, School of Computation, Information and Technology, Technical University of Munich; Siemens Technology
Pseudocode | No | The paper describes the methodology using equations and figures, but it does not contain a formal pseudocode or algorithm block.
Open Source Code | No | Table 11 lists 'no' under the 'Replica' column for 'Ours', indicating that the code is not publicly available. No explicit statement or link to a code repository is provided in the text.
Open Datasets | Yes | For registration, we evaluate on ModelNet40 (Wu et al., 2015)... The ShapeNet (Chang et al., 2015) dataset... Finally, we evaluate on the real-world Scan2CAD (Avetisyan et al., 2019a) dataset. The Scan2CAD dataset consists of RGB-D scans of indoor environments from the ScanNet (Dai et al., 2017) dataset...
Dataset Splits | No | For registration, we evaluate on ModelNet40 (Wu et al., 2015), with 1024 points uniformly sampled from each model in the dataset. The dataset consists of 40 object categories, with 9843 point clouds in the training set and 2468 in the test set. While training and test sets are mentioned, there is no explicit description of a validation set or its split.
Hardware Specification | Yes | Each of our models is trained on one NVIDIA RTX A6000.
Software Dependencies | No | The paper mentions using the ADAM optimizer but does not specify version numbers for any software dependencies, such as programming languages or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We train DeepSPF using the ADAM optimizer with a learning rate of 0.001. For the scheduler and the number of training epochs, we are guided by the configurations of the respective works. Each of our models is trained on one NVIDIA RTX A6000. For each state-of-the-art approach, we replace the baseline encoder with our proposed DeepSPF backbone. Each method we evaluate uses the originally proposed decoder and loss functions. We choose r to be decreasing for each of the three SA blocks of DeepSPF, i.e., r = [1.0, 0.5, 0.25].
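The reported hyperparameters can be collected into a small configuration sketch. This is purely illustrative: the authors' code is not released, so the function and key names below are hypothetical, and only the values (ADAM, lr = 0.001, decreasing r = [1.0, 0.5, 0.25], one RTX A6000) come from the paper.

```python
def make_deepspf_train_config():
    """Illustrative training configuration assembled from the paper's
    reported hyperparameters. Names are hypothetical; the authors'
    implementation is not publicly available."""
    cfg = {
        "optimizer": "Adam",           # ADAM optimizer, as stated in the paper
        "learning_rate": 1e-3,         # learning rate of 0.001
        "sa_radii": [1.0, 0.5, 0.25],  # decreasing r for the three SA blocks
        "device": "cuda",              # trained on one NVIDIA RTX A6000
    }
    # The paper chooses r to be strictly decreasing across the SA blocks.
    assert all(a > b for a, b in zip(cfg["sa_radii"], cfg["sa_radii"][1:]))
    return cfg
```

The scheduler and epoch count are intentionally left out, since the paper defers those to the configurations of the respective baseline works.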