Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Toward Robust Spiking Neural Network Against Adversarial Perturbation

Authors: Ling Liang, Kaidi Xu, Xing Hu, Lei Deng, Yuan Xie

NeurIPS 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our proposed methods are evaluated on MNIST [18], FMNIST [32] and NMNIST [24] datasets. The experimental results show that we can achieve a maximum 37.7% attack error reduction with 3.7% original accuracy loss.
Researcher Affiliation | Collaboration | Ling Liang (UC Santa Barbara); Kaidi Xu (Drexel University); Xing Hu (SKL of Processors, Institute of Computing Technology, CAS); Lei Deng (Tsinghua University); Yuan Xie (Alibaba Group)
Pseudocode | Yes | Algorithm 1 (S-IBP); Algorithm 2 (S-CROWN). A generic IBP sketch appears after this table.
Open Source Code | Yes | Our proposed methods are evaluated on MNIST [18], FMNIST [32] and NMNIST [24] datasets (footnote: https://github.com/liangling76/certify_snn).
Open Datasets | Yes | Our proposed methods are evaluated on MNIST [18], FMNIST [32] and NMNIST [24] datasets (footnote: https://github.com/liangling76/certify_snn).
Dataset Splits | No | The paper mentions 'test data' but does not explicitly specify validation splits. 'In the original training, we adopt BPTT based training [31]. We train 80 epochs for each SNN model.'
Hardware Specification | Yes | The hardware we used is one Nvidia RTX3090 GPU and one AMD Ryzen CPU.
Software Dependencies | No | The paper states 'Our experiments are conducted by Pytorch 1.8.' Only this single component is named with a version; no broader list of key packages or solvers with specific version numbers is given.
Experiment Setup | Yes | In the original training, we adopt BPTT-based training [31]. We train 80 epochs for each SNN model. The learning rate is set to 0.01 at the beginning and decays to 0.001 at the 55th epoch. In robust training, we use the lower bound of S-CROWN as the loss function. During robust training, the perturbation bound ε is set to 0 at the beginning; it increases linearly to the final ε during the first 250 training epochs and is unchanged for the last 50 training epochs. A sketch of this ε schedule appears after this table.
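
The Pseudocode row names S-IBP and S-CROWN, the paper's bound-propagation algorithms for SNNs. Below is a minimal sketch of the generic interval bound propagation (IBP) primitive such algorithms build on. The helper names (`ibp_affine`, `ibp_fire`) are ours, this is not the authors' released certify_snn implementation, and it omits the membrane-potential recurrence over time steps that S-IBP actually certifies.

```python
# Minimal IBP sketch using standard interval arithmetic; helper names are
# hypothetical and this is NOT the authors' certify_snn implementation.
import torch

def ibp_affine(W, b, lower, upper):
    """Propagate the box [lower, upper] through y = x @ W.T + b."""
    center = (upper + lower) / 2        # midpoint of the input box
    radius = (upper - lower) / 2        # half-width of the input box
    out_center = center @ W.t() + b     # affine image of the midpoint
    out_radius = radius @ W.abs().t()   # |W| scales the half-width
    return out_center - out_radius, out_center + out_radius

def ibp_fire(lower, upper, v_th=1.0):
    """Bounds across the firing step 1[v >= v_th]; the step function is
    non-decreasing, so bounds pass through elementwise."""
    return (lower >= v_th).float(), (upper >= v_th).float()

# Toy usage: an L-infinity ball of radius eps around an input x.
x = torch.randn(1, 4)
eps = 0.1
W, b = torch.randn(3, 4), torch.zeros(3)
lo, hi = ibp_affine(W, b, x - eps, x + eps)
spike_lo, spike_hi = ibp_fire(lo, hi)
assert torch.all(spike_lo <= spike_hi)
```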
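
The Experiment Setup row describes a linear ε ramp during robust training. Here is a minimal sketch of such a schedule, assuming 0-indexed epochs, a 250-epoch ramp, and 50 subsequent fixed epochs (300 total); the function name and the example eps_final value are ours, not the paper's.

```python
def eps_schedule(epoch: int, eps_final: float, ramp_epochs: int = 250) -> float:
    """Perturbation bound for a 0-indexed robust-training epoch:
    0 at the start, linear up to eps_final over ramp_epochs, then flat."""
    return eps_final * min(epoch / ramp_epochs, 1.0)

# Example: 300 robust-training epochs (250 ramp + 50 fixed); eps_final is
# an arbitrary illustrative value.
schedule = [eps_schedule(e, eps_final=0.1) for e in range(300)]
assert schedule[0] == 0.0 and schedule[250] == schedule[299] == 0.1
```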