Toward Robust Spiking Neural Network Against Adversarial Perturbation
Authors: Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our proposed methods are evaluated on MNIST [18], FMNIST [32] and NMNIST [24] datasets. The experimental results show that we can achieve a maximum 37.7% attack error reduction with 3.7% original accuracy loss. |
| Researcher Affiliation | Collaboration | Ling Liang, UC Santa Barbara (lingliang@ucsb.edu); Kaidi Xu, Drexel University (kx46@drexel.edu); Xing Hu, SKL of Processors, Institute of Computing Technology, CAS (huxing@ict.ac.cn); Lei Deng, Tsinghua University (leideng@mail.tsinghua.edu.cn); Yuan Xie, Alibaba Group (yuanxie@gmail.com) |
| Pseudocode | Yes | Algorithm 1 S-IBP; Algorithm 2 S-CROWN |
| Open Source Code | Yes | Our proposed methods are evaluated on MNIST [18], FMNIST [32] and NMNIST [24] datasets (footnote 1: https://github.com/liangling76/certify_snn). |
| Open Datasets | Yes | Our proposed methods are evaluated on MNIST [18], FMNIST [32] and NMNIST [24] datasets (footnote 1: https://github.com/liangling76/certify_snn). |
| Dataset Splits | No | The paper mentions 'test data' but does not specify validation splits explicitly. 'In the original training, we adopt BPTT based training [31]. We train 80 epochs for each SNN model.' |
| Hardware Specification | Yes | The hardware we used is one Nvidia RTX3090 GPU and one AMD Ryzen CPU. |
| Software Dependencies | No | The paper states 'Our experiments are conducted by Pytorch 1.8.' This is one software component with a version, but it does not list multiple key components or a self-contained solver/package with specific version numbers. |
| Experiment Setup | Yes | In the original training, we adopt BPTT-based training [31]. We train 80 epochs for each SNN model. The learning rate is set to 0.01 at the beginning and decays to 0.001 at the 55th epoch. In robust training, we use the lower bound of S-CROWN as the loss function. During robust training, we set ε to 0 at the beginning; it increases linearly to the final ε during the first 250 training epochs. In the last 50 training epochs, ε is unchanged. |
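
The learning-rate and perturbation-budget schedules quoted in the experiment-setup cell can be sketched as two small helper functions. This is a minimal illustration, not the authors' code: the function names, the `eps_final` parameter, and the assumption that robust training runs 250 ramp epochs plus 50 constant epochs (300 total) are inferred from the quoted text.

```python
def learning_rate(epoch: int) -> float:
    """Original training (80 epochs): lr starts at 0.01 and decays to
    0.001 at the 55th epoch, per the quoted setup."""
    return 0.01 if epoch < 55 else 0.001


def eps_schedule(epoch: int, eps_final: float,
                 ramp_epochs: int = 250, total_epochs: int = 300) -> float:
    """Robust training: the perturbation bound starts at 0, grows
    linearly to eps_final over the first 250 epochs, and stays at
    eps_final for the last 50 epochs (symbol and totals are assumptions)."""
    if epoch >= ramp_epochs:
        return eps_final
    return eps_final * epoch / ramp_epochs
```

Such linear ε ramps are a common stabilization trick in certified (IBP/CROWN-style) robust training, since starting directly at the final ε tends to make the bound-based loss too loose to optimize.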