SNN-RAT: Robustness-enhanced Spiking Neural Network through Regularized Adversarial Training

Authors: Jianhao Ding, Tong Bu, Zhaofei Yu, Tiejun Huang, Jian K. Liu

NeurIPS 2022

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting LLM response.
Research Type: Experimental
"Our experiments on the image recognition benchmarks have proven that our training scheme can defend against powerful adversarial attacks crafted from strong differentiable approximations."
Researcher Affiliation: Academia
Jianhao Ding, School of Computer Science, Peking University, Beijing, China 100871 (djh01998@stu.pku.edu.cn); Tong Bu, Institute for Artificial Intelligence, School of Computer Science, Peking University, Beijing, China 100871 (putong30@pku.edu.cn); Zhaofei Yu, Institute for Artificial Intelligence, School of Computer Science, Peking University, Beijing, China 100871 (yuzf12@pku.edu.cn); Tiejun Huang, School of Computer Science, Peking University, Beijing, China 100871 (tjhuang@pku.edu.cn); Jian K. Liu, School of Computing, University of Leeds, Leeds LS2 9JT (j.liu9@leeds.ac.uk)
Pseudocode: No
The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: Yes
"The code is available at https://github.com/putshua/SNN-RAT."
Open Datasets: Yes
"We validate our proposed robust SNN training scheme on the image classification tasks, where the CIFAR-10 and CIFAR-100 datasets are used. ... Public datasets."
Dataset Splits: No
The paper mentions using the CIFAR-10 and CIFAR-100 datasets but does not explicitly provide training/validation/test split percentages or sample counts in the provided text.
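Since the split is unreported, the sketch below loads CIFAR-10 with torchvision's built-in partition (50,000 training / 10,000 test images). That the paper uses this standard split is an assumption, not a confirmed detail of the work.

```python
# Minimal sketch: CIFAR-10 via torchvision's standard built-in split.
# Assumption: the paper relies on this default 50,000/10,000 partition;
# the text itself does not confirm any split.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # scales pixels to [0, 1], matching eps given in units of 1/255

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)

print(len(train_set), len(test_set))  # 50000 10000
```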
Hardware Specification: No
The paper states that compute resources were included in the overall submission, but the provided text does not contain specific hardware details such as GPU/CPU models or memory specifications used for running experiments.
Software Dependencies: No
The paper mentions training methods and algorithms but does not specify software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup: Yes
"We set β = 0.001 and 0.004 for VGG-11 and WideResNet-16, respectively. The perturbation boundary ϵ is set to 2/255 when training models. ... Without specific instructions, we set ϵ to 8/255 for all methods for the purpose of testing. For iterative methods like PGD and BIM, the attack step α = 0.01, and the step number is 7."
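To make the quoted attack configuration concrete, here is a minimal PyTorch sketch of L∞ PGD with the stated test-time parameters (ϵ = 8/255, step size α = 0.01, 7 steps). The function name `pgd_attack` and the assumption that inputs lie in [0, 1] are illustrative, not from the paper; for an SNN, gradients through `model` would flow via the differentiable surrogate approximation the paper refers to.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=0.01, steps=7):
    """L-infinity PGD: `steps` ascent steps of size `alpha`, projected
    back into the eps-ball around the clean images after every step."""
    images = images.clone().detach()
    # Random start inside the eps-ball (BIM omits this and starts clean).
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()        # ascend the loss
        adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0, 1)                           # keep valid pixel range
    return adv.detach()
```

BIM corresponds to the same loop initialized at the clean image rather than at a random point inside the ball.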