Spiking Convolutional Neural Networks for Text Classification

Authors: Changze Lv, Jianhan Xu, Xiaoqing Zheng

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted four sets of experiments. The first is to evaluate the accuracy of the SNNs trained with the proposed method on 6 different text classification benchmarks for both English and Chinese by comparing to their DNN counterparts.
Researcher Affiliation | Academia | Changze Lv, Jianhan Xu, and Xiaoqing Zheng, School of Computer Science, Fudan University, Shanghai 200433, China; Shanghai Key Laboratory of Intelligent Information Processing; {czlv18,jianhanxu20,zhengxq}@fudan.edu.cn
Pseudocode | Yes | Algorithm 1: The global algorithm of conversion + fine-tuning for training spiking neural networks. (A sketch of this two-stage procedure is given after the table.)
Open Source Code | Yes | Finally, the source code is available at https://github.com/Lvchangze/snn.
Open Datasets | Yes | We used the following 6 text classification datasets to evaluate the SNNs trained with the proposed method, four of which are English datasets and the other two are Chinese benchmarks: MR (Pang & Lee, 2005), SST-5 (Socher et al., 2013), SST-2 (the binary version of SST-5), Subj, Chn Senti, and Waimai. These datasets vary in the size of examples and the length of texts. If there is no standard training-test split, we randomly select 10% examples from the entire dataset as the test set. We describe the datasets used for evaluation in Appendix A.2.
Dataset Splits | No | The paper states: 'If there is no standard training-test split, we randomly select 10% examples from the entire dataset as the test set.' This describes the test split but does not specify a validation-set size or percentage, and although the method includes a fine-tuning stage, the paper does not explicitly define how data is split for validation during that stage. (A sketch of the reported 10% hold-out rule follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running the experiments. It only refers to 'neuromorphic hardware' in general terms.
Software Dependencies | No | The paper mentions the snnTorch framework and PyTorch, but it does not provide specific version numbers for these software dependencies (e.g., PyTorch 1.9 or snnTorch 0.1).
Experiment Setup | Yes | When training the tailored networks, we set the dropout rate to 0.5, the batch size to 32, and the learning rate to 1e-4. We set the number of time steps to 50, the membrane threshold Uthr to 1, the decay rate β to 1, the batch size to 50, and the learning rate to 5e-5 at the fine-tuning stage of SNNs. We used TextCNN (Kim, 2014) as the neural network architecture from which the tailored network is built, and filter widths of 3, 4, and 5 with 100 feature maps each. Unless otherwise specified, we set h to 10 in all the experiments. (A configuration sketch collecting these values is given below.)
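For the Pseudocode row above, the following is a minimal sketch of the two-stage conversion + fine-tuning pipeline that Algorithm 1 describes, assuming the trained TextCNN weights are copied into a structurally matching spiking network and then fine-tuned with surrogate gradients over the 50-step simulation window. The function name `convert_and_finetune`, the weight copy via `load_state_dict`, and the spike-rate readout are assumptions for illustration, not the authors' exact implementation (their repository contains the real code).

```python
# Hedged sketch of Algorithm 1 (conversion + fine-tuning); names and the
# weight-copy mechanism are illustrative, not taken from the released code.
import torch

def convert_and_finetune(ann, snn_model, train_loader,
                         num_steps=50, lr=5e-5, epochs=5):
    """Copy trained ANN weights into a spiking twin, then fine-tune it."""
    # Conversion step: the spiking model is assumed to share layer names and
    # shapes with the trained TextCNN, so weights can be copied directly.
    snn_model.load_state_dict(ann.state_dict(), strict=False)

    optimizer = torch.optim.Adam(snn_model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()

    for _ in range(epochs):
        for inputs, labels in train_loader:
            # Run the spiking network over the full simulation window and
            # accumulate output spikes (membrane-state resetting between
            # batches is omitted for brevity).
            spk_sum = 0.0
            for _ in range(num_steps):
                spk_sum = spk_sum + snn_model(inputs)   # one forward pass per time step
            logits = spk_sum / num_steps                # spike-rate readout

            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()   # gradients flow through spikes via a surrogate function
            optimizer.step()
    return snn_model
```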
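The Dataset Splits row quotes the paper's fallback rule of holding out 10% of examples as the test set when no standard split exists. A minimal sketch of such a split is shown below; the `all_examples` placeholder and the random seed are assumptions, and the paper does not report a validation split.

```python
# Hedged sketch of the 10% hold-out rule for datasets without a standard split.
from sklearn.model_selection import train_test_split

all_examples = [("some text", 1)] * 1000   # placeholder for (text, label) pairs
train_data, test_data = train_test_split(
    all_examples, test_size=0.10, random_state=0   # seed is an assumption
)
```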
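Finally, the Experiment Setup row lists concrete hyperparameters. The snippet below collects them and sketches a spiking TextCNN-style model with snnTorch leaky integrate-and-fire neurons. Only the numeric values (dropout 0.5, batch sizes 32/50, learning rates 1e-4/5e-5, 50 time steps, threshold 1, decay β = 1, filter widths 3/4/5 with 100 feature maps each, h = 10) come from the paper; the class name `SpikingTextCNN`, the embedding dimensionality, the pooling choice, and the surrogate-gradient function are assumptions.

```python
# Hedged sketch of the reported setup: numeric values come from the paper,
# everything structural (class name, embedding size, pooling, surrogate) is assumed.
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

TAILORED_CFG = dict(dropout=0.5, batch_size=32, lr=1e-4)        # tailored-network training
FINETUNE_CFG = dict(num_steps=50, threshold=1.0, beta=1.0,      # SNN fine-tuning stage
                    batch_size=50, lr=5e-5)
H = 10  # the paper's hyperparameter h; its role is defined in the paper

class SpikingTextCNN(nn.Module):
    """TextCNN-style backbone with leaky integrate-and-fire activations."""
    def __init__(self, embed_dim=300, num_classes=2,
                 num_filters=100, widths=(3, 4, 5)):
        super().__init__()
        spike_grad = surrogate.fast_sigmoid()   # surrogate gradient (assumed choice)
        self.convs = nn.ModuleList(nn.Conv1d(embed_dim, num_filters, w) for w in widths)
        self.lifs = nn.ModuleList(
            snn.Leaky(beta=FINETUNE_CFG["beta"], threshold=FINETUNE_CFG["threshold"],
                      spike_grad=spike_grad, init_hidden=True)
            for _ in widths
        )
        self.dropout = nn.Dropout(TAILORED_CFG["dropout"])
        self.fc = nn.Linear(num_filters * len(widths), num_classes)

    def forward(self, x):
        # x: (batch, embed_dim, seq_len) input for one simulation time step.
        pooled = []
        for conv, lif in zip(self.convs, self.lifs):
            spk = lif(conv(x))                       # spiking activation per branch
            pooled.append(spk.max(dim=-1).values)    # max-pool over the sequence
        return self.fc(self.dropout(torch.cat(pooled, dim=-1)))
```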