Shape-Oriented Convolution Neural Network for Point Cloud Analysis
Authors: Chaoyi Zhang, Yang Song, Lina Yao, Weidong Cai
AAAI 2020, pp. 12773-12780
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been performed to evaluate its significance in the tasks of point cloud classification and part segmentation. |
| Researcher Affiliation | Academia | 1School of Computer Science, University of Sydney, Australia 2School of Computer Science and Engineering, University of New South Wales, Australia |
| Pseudocode | No | The paper includes architectural diagrams (Fig. 1, 2, 3, 4, 5) but no explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the release of open-source code for the described methodology. |
| Open Datasets | Yes | We firstly evaluate our model on the ModelNet40 dataset (Wu et al. 2015) for point cloud classification task. We then further evaluate our model on the ShapeNet-Part dataset (Yi et al. 2016) for the point cloud segmentation task. |
| Dataset Splits | Yes | We split the dataset into 12137 training objects, 1870 validation objects, and 2874 testing objects, following the official split policy announced by (Chang et al. 2015). |
| Hardware Specification | Yes | The overall training framework is implemented on PyTorch with two NVIDIA GTX 1080Ti GPUs, using a distributed training scheme with the Synchronized Batch Norm proposed by (Zhang et al. 2018). (See the distributed-training sketch after this table.) |
| Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | We select Adam as the optimizer, with learning rate 0.001 and cosine annealing applied (Loshchilov and Hutter 2017). Batch size is set to 32, and the corresponding momentum is 0.9. The momentum of batch normalization is initially set as 0.9 and decays with a rate of 0.5 for every 30 epochs. Batch Norm and Leaky ReLU are used in all layers and omitted in the figures for simplification purposes. Dropout layers (with dropout rate = 0.5) are adopted within f_classification, the classification head. (See the optimizer sketch after this table.) |
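
Since no code is released, the distributed setup in the Hardware Specification row can only be approximated. Below is a minimal PyTorch sketch of the reported two-GPU scheme; the model constructor is a placeholder (the paper's network is not available), and `torch.nn.SyncBatchNorm` stands in for the Synchronized Batch Norm of Zhang et al. (2018), which current PyTorch provides as a built-in equivalent.

```python
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

def build_model() -> nn.Module:
    # Placeholder for the paper's (unreleased) network; any nn.Module
    # containing BatchNorm layers illustrates the conversion below.
    return nn.Sequential(
        nn.Conv1d(3, 64, kernel_size=1),
        nn.BatchNorm1d(64),
        nn.LeakyReLU(),
    )

def setup(rank: int, world_size: int = 2) -> nn.Module:
    # One process per GPU, mirroring the two-GPU configuration in the paper.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = build_model().cuda(rank)
    # Aggregate BatchNorm statistics across both GPUs. The paper cites the
    # Synchronized Batch Norm of Zhang et al. (2018); the built-in
    # nn.SyncBatchNorm provides the same behavior in current PyTorch.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
    return DDP(model, device_ids=[rank])
```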
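
Likewise, a hedged sketch of the Experiment Setup row: Adam at learning rate 0.001 with cosine annealing and a BatchNorm-momentum decay of 0.5 every 30 epochs. The epoch count, the 40-class head, and the floor on the decayed momentum are assumptions not stated in the quoted text; note also that PyTorch's BN `momentum` argument is one minus the moving-average convention the paper appears to use.

```python
from torch import nn, optim
from torch.optim.lr_scheduler import CosineAnnealingLR

EPOCHS = 250  # assumption: the total epoch count is not given in this excerpt

# Placeholder classification head: BatchNorm + LeakyReLU in all layers and
# dropout (p = 0.5) in f_classification, with 40 outputs for ModelNet40.
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.BatchNorm1d(512),
    nn.LeakyReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 40),
)

# Adam with learning rate 0.001; the reported momentum 0.9 maps to beta1.
optimizer = optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
# Cosine annealing of the learning rate (Loshchilov and Hutter 2017).
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)

def set_bn_momentum(model: nn.Module, epoch: int,
                    base: float = 0.9, rate: float = 0.5, step: int = 30) -> None:
    # Decay the BN momentum by 0.5 every 30 epochs, as the paper states.
    # PyTorch's `momentum` is 1 minus this moving-average convention, hence
    # the subtraction; the 0.01 floor is an assumed safeguard, not something
    # the paper specifies.
    m = max(base * (rate ** (epoch // step)), 0.01)
    for layer in model.modules():
        if isinstance(layer, (nn.BatchNorm1d, nn.BatchNorm2d)):
            layer.momentum = 1.0 - m

for epoch in range(EPOCHS):
    set_bn_momentum(model, epoch)
    # ... one training epoch over batches of size 32 goes here ...
    scheduler.step()
```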