Training Spiking Neural Networks with Local Tandem Learning
Authors: Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the effectiveness of the proposed LTL rule on the image classification task with CIFAR-10 [29], CIFAR-100 [29], and Tiny ImageNet [55] datasets. |
| Researcher Affiliation | Collaboration | 1 National University of Singapore; 2 The Hong Kong Polytechnic University; 3 University of Electronic Science and Technology of China; 4 China Nanhu Academy of Electronics and Information Technology; 5 The Chinese University of Hong Kong, Shenzhen, China; 6 Kriston AI, Xiamen, China |
| Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | More details about the experimental datasets and implementation details are provided in the Supplementary Materials Section B, and the source code can be found at https://github.com/Aries231/Local_tandem_learning_rule. |
| Open Datasets | Yes | In this section, we evaluate the effectiveness of the proposed LTL rule on the image classification task with CIFAR-10 [29], CIFAR-100 [29], and Tiny ImageNet [55] datasets. |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Supplementary Materials Section B |
| Hardware Specification | No | The paper mentions 'GPU-based AI solutions', 'on the CPU', and 'on the GPU' generally, and refers to 'Supplementary Materials Section G' for GPU memory usage and training time, but does not specify exact GPU/CPU models or other detailed hardware specifications in the main text. |
| Software Dependencies | No | The paper mentions using specific datasets but does not explicitly list software dependencies with version numbers (e.g., libraries, frameworks, or solvers) in the main text. |
| Experiment Setup | Yes | Specifically, we design experiments that progressively increase the time window Tw from 4 to 32 during training, and we compare the offline LTL rule against the STBP [58] learning rule on the CIFAR-10 dataset. To ensure a fair comparison, we use the VGG-11 architecture with the same experimental settings, except that the LTL rule uses local objectives and the STBP rule uses a global objective. ... Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Supplementary Materials Section B (a hedged code sketch of this setup follows the table) |
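To make the Experiment Setup row concrete, below is a minimal, hypothetical PyTorch sketch of the kind of time-window sweep the paper describes: spiking layers trained with layer-wise local losses against a frozen ANN teacher while Tw grows from 4 to 32. This is not the authors' implementation; the names (`SurrogateSpike`, `LIFLayer`), the constant-current input coding, the two-layer toy networks, and the rate-vs-activation MSE loss are all illustrative assumptions. The actual LTL rule and VGG-11 configuration are in the linked repository.

```python
# Hypothetical sketch (NOT the authors' code): layer-wise "local tandem"-style
# training, where each spiking layer is fitted to a frozen ANN teacher's
# intermediate activations with a local MSE loss, swept over time windows Tw.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # box-car surrogate


class LIFLayer(nn.Module):
    """Conv + leaky integrate-and-fire neurons, unrolled over Tw time steps."""
    def __init__(self, c_in, c_out, tau=2.0):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.tau = tau

    def forward(self, x_seq):                      # x_seq: (Tw, B, C, H, W)
        v, spikes = 0.0, []
        for x in x_seq:
            v = v / self.tau + self.conv(x)        # leaky integration
            s = SurrogateSpike.apply(v - 1.0)      # fire at threshold 1.0
            v = v * (1.0 - s)                      # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)


# Toy two-layer student and frozen "teacher"; in the paper the teacher is a
# pretrained VGG-11 ANN, with one local objective per student layer.
student = nn.ModuleList([LIFLayer(3, 16), LIFLayer(16, 16)])
teacher = nn.ModuleList([nn.Conv2d(3, 16, 3, padding=1),
                         nn.Conv2d(16, 16, 3, padding=1)]).requires_grad_(False)
opts = [torch.optim.Adam(layer.parameters(), lr=1e-3) for layer in student]

for Tw in (4, 8, 16, 32):                          # progressively longer windows
    images = torch.rand(8, 3, 32, 32)              # stand-in for a CIFAR-10 batch
    x_seq = images.unsqueeze(0).repeat(Tw, 1, 1, 1, 1)  # constant-current coding
    t_feat = images
    for layer, t_layer, opt in zip(student, teacher, opts):
        out = layer(x_seq)
        t_feat = F.relu(t_layer(t_feat))
        loss = F.mse_loss(out.mean(0), t_feat)     # local loss: firing rate
        opt.zero_grad()                            # vs. ANN activation
        loss.backward()
        opt.step()
        x_seq = out.detach()                       # no gradient across layers:
        t_feat = t_feat.detach()                   # each layer learns locally
    print(f"Tw={Tw}: last local loss {loss.item():.4f}")
```

The `detach()` calls are what make the objectives local in the sense the row contrasts with STBP: gradients never cross layer boundaries, unlike STBP's single global objective backpropagated through the whole unrolled network.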