Deep Confidence Guided Distance for 3D Partial Shape Registration
Authors: Dvir Ginzburg, Dan Raviv
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | CGD-net performance is compared to numerous learnable methods such as DCP, PRNet, RPM-Net, and DWC, as well as to axiomatic methods such as ICP and Go-ICP. We evaluate the registration capability on multiple datasets such as ModelNet40 (Wu et al. 2015), Stanford Bunny (Turk and Levoy 1994), 3DMatch (Zeng et al. 2017), and FAUST scans (Bogo et al. 2014). The discrepancy between the predicted R, t and the ground-truth transformations is measured using the root mean squared error (RMSE) metric. (A hedged sketch of this RMSE evaluation follows the table.) |
| Researcher Affiliation | Academia | Dvir Ginzburg, Dan Raviv (Tel Aviv University); dvirginzburg@mail.tau.ac.il, darav@tauex.tau.ac.il |
| Pseudocode | No | The paper describes the architecture and steps of the method in text and with diagrams (e.g., Figure 2), but it does not include formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate the registration capability on multiple datasets such as ModelNet40 (Wu et al. 2015), Stanford Bunny (Turk and Levoy 1994), 3DMatch (Zeng et al. 2017), and FAUST scans (Bogo et al. 2014). |
| Dataset Splits | No | The paper states, 'The official train-test split of ModelNet40, where 9,843 samples are defined as train samples (80% of the dataset), and the rest 2,568 samples are the test set.' While it specifies train and test splits, it does not explicitly mention a separate validation split or its details. |
| Hardware Specification | Yes | We train and evaluate CGD-net on a single RTX8000 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2019)' but does not specify a version number for it or any other key software dependencies. |
| Experiment Setup | Yes | The hierarchical feature extraction network consists of two downsampling modules, with FPS factors of 0.5 and 0.25 respectively, leaving 125 points in the bottleneck of the network. The output feature dimension is 1024. We use the Leaky ReLU activation function (Xu et al. 2015) with a negative slope of 0.2, and Layer Normalization (Ba, Kiros, and Hinton 2016) for all layers. The initial learning rate is set to 5e-4 with a multiplicative scheduler of γ = 0.9 every 10 epochs. The entire training takes up to 50 epochs for the longest configuration. (A hedged configuration sketch follows the table.) |
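The Research Type row reports that registration accuracy is scored by the RMSE between the predicted and ground-truth (R, t). A minimal sketch of that evaluation is given below, assuming the convention common in DCP/PRNet-style registration work (RMSE over Euler angles of the rotation and over the components of the translation vector); the function names and the Euler-angle convention are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of an RMSE metric for rigid registration (R, t).
# Assumption: rotation error is measured on Euler angles in degrees, as in DCP/PRNet;
# the paper (as summarized above) only states that RMSE is used.
import numpy as np
from scipy.spatial.transform import Rotation


def rmse_rotation(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """R_pred, R_gt: (B, 3, 3) rotation matrices -> RMSE over Euler angles in degrees."""
    euler_pred = Rotation.from_matrix(R_pred).as_euler("zyx", degrees=True)
    euler_gt = Rotation.from_matrix(R_gt).as_euler("zyx", degrees=True)
    return float(np.sqrt(np.mean((euler_pred - euler_gt) ** 2)))


def rmse_translation(t_pred: np.ndarray, t_gt: np.ndarray) -> float:
    """t_pred, t_gt: (B, 3) translation vectors -> RMSE over vector components."""
    return float(np.sqrt(np.mean((t_pred - t_gt) ** 2)))
```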
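The Experiment Setup row can be read as a training configuration. Below is a minimal PyTorch sketch under stated assumptions: the learning rate, decay factor γ, schedule period, epoch budget, FPS factors, feature dimension, Leaky ReLU slope, and use of Layer Normalization are taken from the quoted setup, while the optimizer choice (Adam) and the placeholder model are assumptions, since neither is specified in the excerpt.

```python
# Hedged sketch (not the paper's code) of the reported training configuration.
# Hyperparameter values are quoted from the Experiment Setup row; the optimizer
# choice (Adam) and the placeholder model are illustrative assumptions.
import torch

config = {
    "fps_factors": (0.5, 0.25),   # downsampling ratios of the two FPS modules
    "bottleneck_points": 125,     # points remaining at the network bottleneck
    "feature_dim": 1024,          # output feature dimension
    "leaky_relu_slope": 0.2,      # negative slope of the Leaky ReLU activations
    "norm": "LayerNorm",          # Ba, Kiros, and Hinton (2016)
}

# Placeholder stand-in for the hierarchical feature extraction network.
model = torch.nn.Sequential(
    torch.nn.Linear(3, config["feature_dim"]),
    torch.nn.LayerNorm(config["feature_dim"]),
    torch.nn.LeakyReLU(config["leaky_relu_slope"]),
)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # initial lr from the paper
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)

for epoch in range(50):  # "up to 50 epochs for the longest configuration"
    # ... one pass over the registration training data would go here ...
    optimizer.step()     # placeholder for the per-batch parameter updates
    scheduler.step()     # multiplies the learning rate by 0.9 every 10 epochs
```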