Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
PRNet: Self-Supervised Learning for Partial-to-Partial Registration
Authors: Yue Wang, Justin M. Solomon
NeurIPS 2019 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments are divided into four parts. First, we show performance of PRNet on a partial-to-partial registration task on synthetic data in 4.1. Then, we show PRNet can generalize to real data in 4.2. Third, we visualize the keypoints and correspondences predicted by PRNet in 4.3. Finally, we show a linear SVM trained on representations learned by PRNet can achieve comparable results to supervised learning methods in 4.4. |
| Researcher Affiliation | Academia | Yue Wang Massachusetts Institute of Technology EMAIL Justin Solomon Massachusetts Institute of Technology EMAIL |
| Pseudocode | No | The paper describes the steps of the PRNet algorithm in prose and refers to Figure 1 for illustration, but does not include a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | We release our code to facilitate reproducibility and future research. |
| Open Datasets | Yes | We evaluate partial-to-partial registration on ModelNet40 [62]. |
| Dataset Splits | No | The paper states 'ModelNet40 is split to 9,843 for training and 2,468 for testing' and 'ModelNet40 is split evenly by category into training and testing sets', but does not explicitly provide details for a validation split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper mentions software like DGCNN, Transformer, Adam, and SVM, but does not provide specific version numbers for any of these dependencies. |
| Experiment Setup | Yes | We use DGCNN with 5 dynamic EdgeConv layers and a Transformer to learn co-contextual representations of X and Y. The number of filters in each layer of DGCNN are (64, 64, 128, 256, 512). In the Transformer, only one encoder and one decoder with 4-head attention are used. The embedding dimension is 1024. We train the network for 100 epochs using Adam [63]. The initial learning rate is 0.001 and is divided by 10 at epochs 30, 60, and 80. |
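The learning-rate schedule quoted above (initial rate 0.001, divided by 10 at epochs 30, 60, and 80) can be sketched as a plain step-decay function. This is a minimal illustration, not code from the paper's release; the function name is invented, and whether the paper counts epochs from 0 or 1 is an assumption here.

```python
def prnet_lr(epoch, base_lr=0.001, milestones=(30, 60, 80), gamma=0.1):
    """Step-decay schedule as described in the paper's setup:
    the learning rate is divided by 10 at each milestone epoch.
    Epochs are assumed 0-indexed (an assumption, not stated in the paper)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma  # divide by 10 at each milestone passed
    return lr
```

In a PyTorch training loop the same schedule would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60, 80], gamma=0.1)` on top of an Adam optimizer with `lr=0.001`.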