Enhancing the Robustness of Spiking Neural Networks with Stochastic Gating Mechanisms
Authors: Jianhao Ding, Zhaofei Yu, Tiejun Huang, Jian K. Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results prove that our method can be used alone or with existing robust enhancement algorithms to improve SNN robustness and reduce SNN energy consumption. |
| Researcher Affiliation | Academia | 1School of Computer Science, Peking University 2Institution for Artificial Intelligence, Peking University 3School of Computer Science, University of Birmingham |
| Pseudocode | No | The paper describes its methods in prose and uses mathematical equations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/DingJianhao/StoG-meets-SNN/. |
| Open Datasets | Yes | To verify the effectiveness of our method, we conduct experiments on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, Hinton et al. 2009). |
| Dataset Splits | No | The paper mentions using CIFAR-10 and CIFAR-100 datasets but does not explicitly state the training, validation, or test split percentages or sample counts within the main text. |
| Hardware Specification | Yes | The experiments are conducted on GPU devices of the NVIDIA RTX 3090 with PyTorch (v1.12.1). |
| Software Dependencies | Yes | The experiments are conducted on GPU devices of the NVIDIA RTX 3090 with PyTorch (v1.12.1). |
| Experiment Setup | Yes | To punish Po, we set γ = 5×10⁻⁶ by default. We train our model with white-box FGSM adversarial examples on each mini-batch of images. The perturbation boundary is 2/255 (Kundu, Pedram, and Beerel 2021). The EOT step is set to 10 by default... The intensity of the FGSM attack is 8/255. For the PGD-l∞ attack, the overall intensity, step number, and step size are fixed to 8/255, 7, and 0.01, respectively. |
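The attack recipe quoted above (FGSM at ε = 8/255; PGD with ε = 8/255, 7 steps, step size 0.01) can be sketched generically. The snippet below is a minimal NumPy illustration of those update rules, not the authors' implementation: the function names and the toy gradient callback are assumptions, and a real run would compute gradients through the SNN with a framework such as PyTorch.

```python
import numpy as np

def fgsm(x, grad, eps=8 / 255):
    """Single-step FGSM: move along the sign of the loss gradient,
    then clip back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def pgd_linf(x, grad_fn, eps=8 / 255, steps=7, alpha=0.01):
    """PGD under an l_inf budget: iterate sign-gradient steps and
    project back into the eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
    return x_adv
```

With 7 steps of size 0.01 the cumulative step (0.07) exceeds the 8/255 budget, so the projection step is what keeps the final perturbation inside the stated bound.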