Point-to-Spike Residual Learning for Energy-Efficient 3D Point Cloud Classification

Authors: Qiaoyun Wu, Quanxiao Zhang, Chunyu Tan, Yun Zhou, Changyin Sun

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of 3D point cloud classification on both the synthetic dataset ModelNet40 (Wu et al. 2015) and the real dataset ScanObjectNN (Uy et al. 2019). Ablation study: We first conduct ablation experiments on ModelNet40 to determine the final architecture of P2SResLNet. Table 2 reports the evaluation results on the ModelNet40 testing set. Table 7 presents the comparison results from three types of classification networks on two benchmark datasets.
Researcher Affiliation | Academia | Qiaoyun Wu1,2,3, Quanxiao Zhang1,2,3, Chunyu Tan1,2,3, Yun Zhou1,4, Changyin Sun1,2,3 1School of Artificial Intelligence, Anhui University 2Engineering Research Center of Autonomous Unmanned System Technology, Ministry of Education 3Anhui Provincial Engineering Research Center for Unmanned System and Intelligent Technology 4Institute of Artificial Intelligence, Hefei Comprehensive National Science Center wuqiaoyun@ahu.edu.cn, zhouy@ahu.edu.cn
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper states 'Our implementation is based on PyTorch and SpikingJelly (Fang et al. 2020)', but it does not explicitly state that the authors' code for this work is open-source or provide a link to it.
Open Datasets | Yes | We evaluate the performance of 3D point cloud classification on both the synthetic dataset ModelNet40 (Wu et al. 2015) and the real dataset ScanObjectNN (Uy et al. 2019).
Dataset Splits | No | The paper specifies training and testing set sizes for ModelNet40 (training: 9,843, testing: 2,468) and ScanObjectNN (training: 11,416, testing: 2,882), but it does not explicitly provide details about a separate validation split or its size.
Hardware Specification | Yes | Our experiments are conducted on a PC with the 11th Gen Intel i7 11700K 3.60GHz 16-core processor and an NVIDIA GeForce RTX 3070 GPU.
Software Dependencies | No | Our implementation is based on PyTorch and SpikingJelly (Fang et al. 2020). The paper mentions the software used but does not specify version numbers for PyTorch or SpikingJelly.
Experiment Setup | Yes | We update the network parameters using the SGD optimizer and the learning rate is initialized to 10^-3. The sampling radius of the first point cloud down-sampling layer is an important hyperparameter. In follow-up experiments, we set it for ModelNet40 to 0.15, for ScanObjectNN to 0.3 by default. For the spiking neurons, we set the time latency T to 1 by default.
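To make the "time latency T" hyperparameter concrete: in spiking networks of this kind, the same input is presented for T discrete timesteps, and each integrate-and-fire neuron accumulates membrane potential, spiking when it crosses a threshold. The sketch below is a minimal pure-Python illustration of that mechanism, not the paper's SpikingJelly-based implementation; the function names, the threshold of 1.0, and the hard-reset rule are illustrative assumptions.

```python
def if_neuron_forward(currents, threshold=1.0):
    """Integrate-and-fire neuron: accumulate input current over timesteps,
    emit a spike (1) and hard-reset when membrane potential crosses threshold.
    (Illustrative sketch; the paper's neuron model may differ.)"""
    v = 0.0
    spikes = []
    for i in currents:
        v += i
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes

def run_with_latency(x, T, threshold=1.0):
    """Present the same input current x for T timesteps (rate coding)."""
    return if_neuron_forward([x] * T, threshold)

print(run_with_latency(0.6, T=1))  # [0]: one step is not enough to fire
print(run_with_latency(0.6, T=4))  # [0, 1, 0, 1]: fires every two steps
```

With T = 1, as in the paper's default, the network makes a single forward pass per input, which is what keeps the energy cost low; larger T gives the neurons more timesteps to integrate and spike at the cost of more computation.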