PointNeXt: Revisiting PointNet++ with Improved Training and Scaling Strategies
Authors: Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, Bernard Ghanem
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through a comprehensive empirical study on various benchmarks, e.g., ScanObjectNN [42] for object classification and S3DIS [1] for semantic segmentation, we discover that training strategies, i.e., data augmentation and optimization techniques, play an important role in the network's performance. |
| Researcher Affiliation | Collaboration | King Abdullah University of Science and Technology (KAUST), Microsoft Research |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures), only architectural diagrams. |
| Open Source Code | Yes | The code and models are available at https://github.com/guochengqian/pointnext. |
| Open Datasets | Yes | We evaluate PointNeXt on five standard benchmarks: S3DIS [1] and ScanNet [5] for semantic segmentation, ScanObjectNN [42] and ModelNet40 [47] for object classification, and ShapeNetPart [3] for object part segmentation. |
| Dataset Splits | Yes | ScanNet [5]... We follow the public training, validation, and test splits, with 1201, 312 and 100 scans, respectively. |
| Hardware Specification | Yes | We train PointNeXt using... a 32G V100 GPU, for all tasks, unless otherwise specified. |
| Software Dependencies | No | The paper mentions using 'CrossEntropy loss with label smoothing [37]', 'AdamW optimizer [25]', and 'PolyFocal loss [17]', and refers to a PyTorch reimplementation [50] for the baseline, but it does not specify version numbers for the software libraries or frameworks used (e.g., PyTorch version, CUDA version). |
| Experiment Setup | Yes | We train PointNeXt using CrossEntropy loss with label smoothing [37], AdamW optimizer [25], an initial learning rate lr = 0.001, weight decay 10⁻⁴, with Cosine Decay, and a batch size of 32, with a 32G V100 GPU, for all tasks, unless otherwise specified. ... for 100 epochs (training set is repeated by 30 times), using a fixed number of points (24,000) per batch with a batch size of 8 as input. |
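
The hyperparameters quoted in the Experiment Setup row map onto standard PyTorch components. Below is a minimal sketch, assuming PyTorch as in the reimplementation cited by the paper [50]; the model, the training loop, and the label-smoothing value are placeholders for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

# Placeholder model; the actual PointNeXt architecture is available at the repository above.
model = nn.Linear(3, 40)
epochs = 100  # epoch count quoted for the segmentation setting

# CrossEntropy with label smoothing; the smoothing value 0.1 is an assumption,
# the table only states that label smoothing is used.
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)

# AdamW with initial lr = 0.001 and weight decay 1e-4, as quoted.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Cosine decay of the learning rate over the training run.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # ... forward pass, criterion(logits, labels), backward, optimizer.step() per batch ...
    scheduler.step()
```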