PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
Authors: Charles Ruizhongtai Qi, Li Yi, Hao Su, Leonidas J. Guibas
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds. |
| Researcher Affiliation | Academia | Charles R. Qi Li Yi Hao Su Leonidas J. Guibas Stanford University |
| Pseudocode | No | The paper describes the architecture and layers in detail through text and diagrams (Figures 2 and 3) but does not include any explicit pseudocode or algorithm blocks (a reconstruction sketch follows this table). |
| Open Source Code | No | The paper does not provide an explicit statement or link to publicly available open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate on four datasets ranging from 2D objects (MNIST [11]), 3D objects (ModelNet40 [31] rigid object, SHREC15 [12] non-rigid object) to real 3D scenes (ScanNet [5]). |
| Dataset Splits | Yes | MNIST: Images of handwritten digits with 60k training and 10k testing samples. ModelNet40: CAD models of 40 categories (mostly man-made). We use the official split with 9,843 shapes for training and 2,468 for testing. SHREC15: 1200 shapes from 50 categories... We use five fold cross validation to acquire classification accuracy on this dataset. ScanNet: 1513 scanned and reconstructed indoor scenes. We follow the experiment setting in [5] and use 1201 scenes for training, 312 scenes for test. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions using 'a three-level hierarchical network with three fully connected layers' and input point sizes (e.g., 512 for MNIST, 1024 for ModelNet40), but it does not specify concrete hyperparameter values (like learning rate, batch size, epochs), optimizer settings, or detailed training configurations. |
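
For context on the pseudocode finding above: Section 3 of the paper describes each set abstraction level as a Sampling layer (farthest point sampling), a Grouping layer (ball query), and a PointNet layer (a shared point-wise MLP followed by max pooling). Since the paper contains no pseudocode, the NumPy sketch below is only our reconstruction of one such level from those descriptions; the function names, the neighbour-padding strategy, and the `mlp` callable are illustrative assumptions, not the authors' code.

```python
import numpy as np

def farthest_point_sampling(xyz, n_centroids):
    """Iteratively pick the point farthest from the already-chosen set (FPS)."""
    n = xyz.shape[0]
    chosen = np.zeros(n_centroids, dtype=np.int64)
    dist = np.full(n, np.inf)
    chosen[0] = np.random.randint(n)
    for i in range(1, n_centroids):
        dist = np.minimum(dist, np.linalg.norm(xyz - xyz[chosen[i - 1]], axis=1))
        chosen[i] = int(np.argmax(dist))
    return chosen

def ball_query(xyz, centroids_xyz, radius, k):
    """For each centroid, gather k neighbour indices within `radius`.
    Groups with fewer than k neighbours are padded by repetition (an assumption)."""
    groups = []
    for c in centroids_xyz:
        idx = np.where(np.linalg.norm(xyz - c, axis=1) < radius)[0]
        groups.append(np.resize(idx, k))  # centroid itself is always inside the ball
    return np.stack(groups)

def set_abstraction(xyz, feats, n_centroids, radius, k, mlp):
    """One SA level: sample centroids, group by ball query, run a shared
    point-wise MLP on each local region, then max-pool over the region."""
    centroid_idx = farthest_point_sampling(xyz, n_centroids)
    new_xyz = xyz[centroid_idx]
    group_idx = ball_query(xyz, new_xyz, radius, k)       # (n_centroids, k)
    grouped = xyz[group_idx] - new_xyz[:, None, :]        # local coordinates
    if feats is not None:
        grouped = np.concatenate([grouped, feats[group_idx]], axis=-1)
    pointwise = mlp(grouped)                              # (n_centroids, k, C')
    new_feats = pointwise.max(axis=1)                     # symmetric max-pool
    return new_xyz, new_feats

# Tiny smoke test with a random point cloud and an identity "MLP" placeholder.
pts = np.random.rand(1024, 3)
new_xyz, new_feats = set_abstraction(pts, None, n_centroids=512, radius=0.2,
                                     k=32, mlp=lambda g: g)
```

A full PointNet++ classifier would stack three such levels, with the last level grouping all remaining points into a single global feature before the fully connected head.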
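For the experiment setup row, a reproducer would have to choose the unspecified values themselves. The dictionary below is a hypothetical configuration sketch: only the items marked "from paper" (three set abstraction levels, three fully connected layers, 1024 input points for ModelNet40) come from the text; every radius, layer width, and training hyperparameter is a placeholder guess, not a value reported by the authors.

```python
# Hypothetical PointNet++ classification config for ModelNet40 (sketch only).
config = {
    "n_input_points": 1024,        # from paper (ModelNet40 experiments)
    "sa_levels": [                 # three set abstraction levels (count from paper)
        {"n_centroids": 512, "radius": 0.2, "k": 32, "mlp": [64, 64, 128]},        # placeholders
        {"n_centroids": 128, "radius": 0.4, "k": 64, "mlp": [128, 128, 256]},      # placeholders
        {"n_centroids": 1, "radius": None, "k": None, "mlp": [256, 512, 1024]},    # global level, placeholders
    ],
    "fc_layers": [512, 256, 40],   # three FC layers (count from paper); widths are placeholders, 40 = ModelNet40 classes
    "optimizer": "Adam",           # placeholder -- not specified in the paper
    "learning_rate": 1e-3,         # placeholder -- not specified in the paper
    "batch_size": 16,              # placeholder -- not specified in the paper
    "epochs": 250,                 # placeholder -- not specified in the paper
}
```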