Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies

Authors: Wei Fang, Zhaofei Yu, Zhaokun Zhou, Ding Chen, Yanqi Chen, Zhengyu Ma, Timothée Masquelier, Yonghong Tian

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the PSN family on simulation speed and temporal/static data classification, and the results show the overwhelming advantage of the PSN family in efficiency and accuracy.
Researcher Affiliation | Academia | 1 School of Computer Science, Peking University, China; 2 Peng Cheng Laboratory, China; 3 School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China; 4 School of Artificial Intelligence, Peking University, China; 5 Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; 6 Centre de Recherche Cerveau et Cognition (CERCO), UMR 5549, CNRS, Université Toulouse 3, France
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our codes are available at https://github.com/fangwei123456/Parallel-Spiking-Neuron.
Open Datasets | Yes | We evaluated the PSN family on the static CIFAR10 and ImageNet datasets, and the neuromorphic CIFAR10-DVS [54] dataset.
Dataset Splits | No | The paper does not describe dataset splits; it states only that the details of the training are provided in the supplementary materials.
Hardware Specification | No | The paper mentions "CUDA devices" but does not provide specific hardware details such as GPU or CPU models or memory capacity used for the experiments.
Software Dependencies | No | The paper mentions PyTorch, Intel MKL, cuBLAS, and SpikingJelly [52] but does not provide version numbers for these software dependencies.
Experiment Setup | Yes | For the masked PSN, λ = min(1, 8 · epoch / (epochs − 1)), where epoch denotes the current training epoch and epochs denotes the total number of epochs. The details of the training are provided in the supplementary materials.
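
As a quick illustration of the λ schedule quoted in the Experiment Setup row, here is a minimal Python sketch. The function name masked_psn_lambda and the total epoch count are illustrative assumptions; how λ is actually combined with the time-step mask inside the masked PSN is described in the paper and its supplementary materials and is not reproduced here.

```python
def masked_psn_lambda(epoch: int, epochs: int) -> float:
    """Mask-blending factor: ramps linearly with the epoch and saturates at 1.

    As quoted: lambda = min(1, 8 * epoch / (epochs - 1)), so lambda reaches 1
    after roughly the first eighth of training.
    """
    return min(1.0, 8.0 * epoch / (epochs - 1))


if __name__ == "__main__":
    total_epochs = 64  # illustrative value; the paper's totals are in the supplementary materials
    for e in (0, 2, 4, 8, 63):
        print(f"epoch {e:2d}: lambda = {masked_psn_lambda(e, total_epochs):.3f}")
```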