SK-Net: Deep Learning on Point Cloud via End-to-End Discovery of Spatial Keypoints
Authors: Weikun Wu, Yan Zhang, David Wang, Yunqi Lei
AAAI 2020, pp. 6422-6429 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to evaluate the performance of our method, which is better than or comparable with the state-of-the-art approaches. We also present an ablation study to demonstrate the advantages of SK-Net. |
| Researcher Affiliation | Academia | (1) Computer Science Department, Xiamen University, China; (2) School of Mathematics Science, Guizhou Normal University, China; (3) Department of Electrical and Computer Engineering, The Ohio State University, USA |
| Pseudocode | No | The paper describes the architecture and modules (e.g., PDE module) in detail using text and diagrams, but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code availability or a link to a code repository. |
| Open Datasets | Yes | Datasets: We validate on four datasets to demonstrate the effectiveness of our SK-Net. Object classification on ModelNet (Wu et al. 2015) is evaluated by accuracy, part segmentation on ShapeNet Part (Yi et al. 2016) is evaluated by mean Intersection over Union (mIoU) on points, and semantic scene labeling on ScanNet (Dai et al. 2017) is evaluated by per-point accuracy. (A hedged mIoU sketch follows the table.) |
| Dataset Splits | Yes | ModelNet10 consists of 4,899 object instances which are split into 3,991 training samples and 908 testing samples. ModelNet40 consists of 12,311 object instances among which 9,843 objects belong to the training set and the other 2,468 samples for testing. |
| Hardware Specification | Yes | We run our model on GeForce GTX Titan X for training. |
| Software Dependencies | No | SK-Net is implemented by TensorFlow in CUDA. The paper names TensorFlow and CUDA but does not provide specific version numbers for either. |
| Experiment Setup | Yes | In general, we set the number of Skeypoints to 192, K to 16, and H to 32 in the PDE module. In most experiments, the hyperparameters δ and θ of our two regulating losses are both 0.05, and the weights of all loss terms are identical. In addition, we use the Adam (Kingma and Ba 2014) optimizer with an initial learning rate of 0.001, and the learning rate is decreased by staircase exponential decay. Batch size is 16. All layers are implemented with batch normalization. PReLU activation is applied to the layers of the point feature extraction and Skeypoint inference components, while ReLU activation is applied to every layer of the subsequent network. (A hedged sketch of this setup follows the table.) |
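The Open Datasets row quotes mIoU on points as the ShapeNet Part metric. For readers unfamiliar with the metric, below is a minimal per-shape point-mIoU sketch, assuming the common convention that a part absent from both prediction and ground truth scores IoU 1; the function name and NumPy implementation are illustrative, not taken from the paper.

```python
import numpy as np

def shape_miou(pred: np.ndarray, label: np.ndarray, part_ids) -> float:
    """Mean IoU over the parts of one shape.

    pred, label: per-point part ids, shape (N,).
    part_ids: the part ids belonging to this shape's category.
    """
    ious = []
    for p in part_ids:
        inter = np.sum((pred == p) & (label == p))
        union = np.sum((pred == p) | (label == p))
        # Convention: a part missing from both pred and label scores IoU 1.
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

# Toy usage: 6 points on a 2-part shape.
pred = np.array([0, 0, 1, 1, 1, 0])
label = np.array([0, 0, 1, 1, 0, 0])
print(shape_miou(pred, label, part_ids=[0, 1]))  # (3/4 + 2/3) / 2 ≈ 0.708
```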
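And a minimal sketch of the training configuration quoted in the Experiment Setup row, written in TensorFlow 2/Keras since the paper reports a TensorFlow implementation. The `decay_steps` and `decay_rate` values are assumptions; the paper only states that the learning rate "is decreased by staircase exponential decay".

```python
import tensorflow as tf

# Stated: initial learning rate 0.001, staircase exponential decay, Adam.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,  # stated in the paper
    decay_steps=200_000,          # assumed; not given in the paper
    decay_rate=0.7,               # assumed; not given in the paper
    staircase=True,               # stated: staircase exponential decay
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)

BATCH_SIZE = 16  # stated in the paper

# Stated layer pattern: batch normalization on all layers; PReLU in the
# point feature extraction / Skeypoint inference components, ReLU in the
# subsequent network. Layer shapes here are illustrative.
def conv_bn_prelu(x, filters):
    x = tf.keras.layers.Conv1D(filters, 1, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.PReLU(shared_axes=[1])(x)

def dense_bn_relu(x, units):
    x = tf.keras.layers.Dense(units, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)
```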