Robust Stable Spiking Neural Networks
Authors: Jianhao Ding, Zhiyu Pan, Yujia Liu, Zhaofei Yu, Tiejun Huang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show the effectiveness of the overall training framework, which significantly improves adversarial robustness in image recognition on the CIFAR-10 and CIFAR-100 datasets. |
| Researcher Affiliation | Academia | (1) School of Computer Science, Peking University, Beijing, China 100871; (2) Institute for Artificial Intelligence, Peking University, Beijing, China 100871. |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Please refer to https://github.com/DingJianhao/stable-snn for our code implementation. |
| Open Datasets | Yes | We conduct experiments to verify our method to construct a robust, stable SNN for the image classification task. ... for the CIFAR-10 and CIFAR-100 datasets. |
| Dataset Splits | No | The paper mentions training on CIFAR-10 and CIFAR-100 datasets, which have standard train/test splits, but does not explicitly specify how a validation split was created or used. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. It only mentions using "float16 floating point precision during training". |
| Software Dependencies | No | The paper describes the use of SGD optimizer and STBP training algorithm, but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | The time step to infer SNN is set to 8 by default. ... The number of training epochs is 100. The batch size is 64. ... We use the SGD optimizer with an initial learning rate of 0.1. During training, the learning rate will decay to 0 in a cosine manner. The leakage factor for all SNNs is equal to 0.99. For models without regularization, we add l2 regularization terms with an intensity of 0.0005 during the model training process. |
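
The experiment-setup row above lists concrete hyperparameters (8 inference time steps, 100 epochs, batch size 64, SGD with an initial learning rate of 0.1 and cosine decay to 0, leakage factor 0.99, and l2 regularization of 0.0005). The snippet below is a minimal sketch of how those reported settings might map onto a PyTorch training loop; `build_snn` and `train_loader` are hypothetical placeholders, and the STBP surrogate-gradient machinery from the paper is assumed to live inside the model, so this is not the authors' implementation.

```python
# Hedged sketch of the reported training configuration (not the authors' code).
# Only the hyperparameters quoted in the table row above are taken from the paper;
# the model constructor and data loader are assumptions for illustration.
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

TIME_STEPS = 8        # SNN inference time steps (paper default)
EPOCHS = 100          # number of training epochs
BATCH_SIZE = 64       # batch size reported in the paper
LEAK = 0.99           # leakage factor of the spiking neurons
WEIGHT_DECAY = 5e-4   # l2 regularization intensity for models without the proposed regularizer

model = build_snn(leak=LEAK, time_steps=TIME_STEPS)   # hypothetical constructor
optimizer = SGD(model.parameters(), lr=0.1, weight_decay=WEIGHT_DECAY)
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS, eta_min=0.0)  # lr decays to 0 cosinely

scaler = torch.cuda.amp.GradScaler()  # paper mentions float16 precision during training

for epoch in range(EPOCHS):
    for images, labels in train_loader:               # hypothetical CIFAR-10/100 loader
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            logits = model(images)                    # STBP-style forward over TIME_STEPS
            loss = torch.nn.functional.cross_entropy(logits, labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()
```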