Enhancing Adversarial Robustness in SNNs with Sparse Gradients

Authors: Yujia Liu, Tong Bu, Jianhao Ding, Zecheng Hao, Tiejun Huang, Zhaofei Yu

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of our approach through extensive experiments on both image-based and event-based datasets. The results demonstrate notable improvements in the robustness of SNNs.
Researcher Affiliation | Academia | 1) NERCVT, School of Computer Science, Peking University, China; 2) National Key Laboratory for Multimedia Information Processing, Peking University, China; 3) School of Computer Science, Peking University, China; 4) Institution for Artificial Intelligence, Peking University, China. Correspondence to: Zhaofei Yu <yuzf12@pku.edu.cn>.
Pseudocode | Yes | The overall training algorithm is presented as Algorithm 1.
Open Source Code | Yes | The main contributions of our work are as follows and the code of this work is accessible at https://github.com/putshua/gradient_reg_defense.
Open Datasets | Yes | In this section, we evaluate the performance of the proposed SR strategy on image classification tasks using the CIFAR-10, CIFAR-100, and CIFAR10-DVS datasets.
Dataset Splits | No | The paper does not provide specific details about training, validation, and test dataset splits in terms of percentages or explicit sample counts, nor does it cite a source for predefined splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or specific cloud computing instance specifications).
Software Dependencies | No | The paper mentions software components such as the Cross Entropy loss function and the Stochastic Gradient Descent optimizer, and refers to specific techniques by citation (e.g., the cosine annealing strategy (Loshchilov & Hutter, 2017) and the Backpropagation Through Time (BPTT) algorithm), but it does not give version numbers for any software dependencies such as PyTorch, TensorFlow, or other libraries.
Experiment Setup | Yes | We use the same training settings for all architectures and datasets. Our data augmentation techniques include Random Crop, Random Horizontal Flip, and zero-mean normalization. During training, we use the Cross Entropy loss function and Stochastic Gradient Descent optimizer with momentum. The learning rate η is controlled by the cosine annealing strategy (Loshchilov & Hutter, 2017). We utilize the Backpropagation Through Time (BPTT) algorithm with a triangle-shaped surrogate function, as introduced by (Esser et al., 2016). When incorporating sparsity gradient regularization, we set the step size of the finite difference method to 0.01. Also, we use λ = 0.002 on CIFAR-10/CIFAR10-DVS and λ = 0.001 on CIFAR-100 for the SR* method. For vanilla SR, we set λ = 0.008 on CIFAR-10 and λ = 0.002 on CIFAR-100/CIFAR10-DVS. The detailed training hyper-parameters are listed in Table 5 ("Detailed training setting") of the paper.
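
To make the quoted setup concrete, below is a minimal, hypothetical PyTorch sketch that wires those pieces together: the quoted augmentations, Cross Entropy loss, SGD with momentum under cosine annealing, a triangle-shaped surrogate for BPTT, and a finite-difference input-gradient penalty using the quoted step size 0.01 and λ = 0.002. The TinySNN model, learning rate, momentum, batch size, timestep count, normalization statistics, and the exact form of the SR/SR* regularizer are assumptions for illustration; they are not taken from the paper or its released code.

```python
# Hypothetical sketch of the quoted training setup (PyTorch assumed). The model, the
# exact SR/SR* regularizer, and all unquoted hyper-parameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class TriangleSpike(torch.autograd.Function):
    """Heaviside spike with a triangle-shaped surrogate gradient (after Esser et al., 2016)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()                     # spike when membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * torch.clamp(1.0 - v.abs(), min=0.0)   # triangle centered at the threshold


class TinySNN(nn.Module):
    """Toy fully-connected SNN unrolled over T timesteps so BPTT applies (illustrative only)."""

    def __init__(self, T=4, hidden=256, classes=10):
        super().__init__()
        self.T = T
        self.fc1 = nn.Linear(3 * 32 * 32, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        x = x.flatten(1)
        v = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        logits = 0.0
        for _ in range(self.T):
            v = v + self.fc1(x)                     # integrate input current
            s = TriangleSpike.apply(v - 1.0)        # fire against a threshold of 1
            v = v - s                               # soft reset by subtraction
            logits = logits + self.fc2(s)
        return logits / self.T                      # rate-averaged readout


# Quoted augmentations: Random Crop, Random Horizontal Flip, zero-mean normalization.
transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),  # assumed CIFAR-10 stats
])
loader = DataLoader(datasets.CIFAR10("./data", train=True, download=True, transform=transform),
                    batch_size=128, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinySNN().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)          # lr/momentum assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)   # cosine annealing

h, lam = 0.01, 0.002   # finite-difference step size and lambda quoted for CIFAR-10 (SR*)

for x, y in loader:                                 # one epoch shown
    x, y = x.to(device), y.to(device)
    x.requires_grad_(True)
    loss = criterion(model(x), y)
    # First-order input gradient; retain the graph so `loss` can still be backpropagated below.
    g = torch.autograd.grad(loss, x, retain_graph=True)[0]
    # Finite-difference probe along sign(g): (L(x + h*d) - L(x)) / h ≈ ||∇_x L||_1,
    # i.e. an l1-style penalty encouraging sparse input gradients without double backprop.
    # This is a generic stand-in for the paper's SR/SR* regularizer, not its exact form.
    x_probe = x.detach() + h * g.sign()
    reg = (criterion(model(x_probe), y) - loss) / h
    total = loss + lam * reg
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
scheduler.step()                                    # stepped once per epoch
```

The finite-difference step avoids second-order backpropagation: the perturbation direction is detached, so the regularizer is just a difference of two ordinary forward passes scaled by 1/h, which matches the role the quoted step size 0.01 plays in the setup, even though the paper's precise regularizer may differ.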