PRNet: Self-Supervised Learning for Partial-to-Partial Registration
Authors: Yue Wang, Justin M. Solomon
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments are divided into four parts. First, we show the performance of PRNet on a partial-to-partial registration task on synthetic data in 4.1. Then, we show PRNet can generalize to real data in 4.2. Third, we visualize the keypoints and correspondences predicted by PRNet in 4.3. Finally, we show that a linear SVM trained on representations learned by PRNet can achieve results comparable to supervised learning methods in 4.4. |
| Researcher Affiliation | Academia | Yue Wang, Massachusetts Institute of Technology (yuewangx@mit.edu); Justin Solomon, Massachusetts Institute of Technology (jsolomon@mit.edu) |
| Pseudocode | No | The paper describes the steps of the PRNet algorithm in prose and refers to Figure 1 for illustration, but does not include a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | We release our code to facilitate reproducibility and future research. |
| Open Datasets | Yes | We evaluate partial-to-partial registration on ModelNet40 [62]. |
| Dataset Splits | No | The paper states 'ModelNet40 is split to 9,843 for training and 2,468 for testing' and 'ModelNet40 is split evenly by category into training and testing sets', but does not explicitly provide details for a validation split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper mentions components such as DGCNN, the Transformer, Adam, and a linear SVM, but does not name specific software libraries or provide version numbers for any dependencies. |
| Experiment Setup | Yes | We use DGCNN with 5 dynamic EdgeConv layers and a Transformer to learn co-contextual representations of X and Y. The numbers of filters in the DGCNN layers are (64, 64, 128, 256, 512). In the Transformer, only one encoder and one decoder with 4-head attention are used. The embedding dimension is 1024. We train the network for 100 epochs using Adam [63]. The initial learning rate is 0.001 and is divided by 10 at epochs 30, 60, and 80. (A configuration sketch follows this table.) |
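
The quoted setup maps directly onto standard deep-learning tooling. Below is a minimal sketch, assuming PyTorch (the paper does not name its framework): `nn.Transformer` stands in for the one-encoder/one-decoder, 4-head co-contextual module, the DGCNN backbone is omitted, and every module name is illustrative rather than taken from the authors' released code. The stated schedule ("divided by 10 at epochs 30, 60, and 80") corresponds to a `MultiStepLR` scheduler with `gamma=0.1`.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper.
DGCNN_FILTERS = (64, 64, 128, 256, 512)  # 5 dynamic EdgeConv layers (backbone omitted here)
EMBEDDING_DIM = 1024
TRANSFORMER_HEADS = 4                    # one encoder, one decoder, 4-head attention

# Illustrative placeholder for the co-contextual Transformer module.
model = nn.Transformer(
    d_model=EMBEDDING_DIM,
    nhead=TRANSFORMER_HEADS,
    num_encoder_layers=1,
    num_decoder_layers=1,
)

# Adam with initial learning rate 0.001, divided by 10 at epochs 30, 60, 80.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60, 80], gamma=0.1)

for epoch in range(100):  # trained for 100 epochs
    # ... one pass over the ModelNet40 training split would go here ...
    scheduler.step()
```

This sketch only pins down the optimizer and schedule the paper states explicitly; the actual PRNet architecture and training loop are available in the authors' released code.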