Certified Adversarial Robustness for Rate Encoded Spiking Neural Networks

Authors: Bhaskar Mukhoty, Hilal AlQuabeh, Giulia De Masi, Huan Xiong, Bin Gu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental validation of the method is performed across several static image datasets, including CIFAR-10, CIFAR-100, and ImageNet-100.
Researcher Affiliation | Collaboration | Bhaskar Mukhoty (1), Hilal AlQuabeh (1), Giulia De Masi (2,3), Huan Xiong (1,4), Bin Gu (1,5); (1) Mohamed bin Zayed University of Artificial Intelligence, UAE; (2) ARRC, Technology Innovation Institute, UAE; (3) The BioRobotics Institute, Sant'Anna School of Advanced Studies, Pisa, Italy; (4) Harbin Institute of Technology, China; (5) School of Artificial Intelligence, Jilin University, China
Pseudocode | Yes | Algorithm 1: predict g_T(x); Algorithm 2: certify g_T around x
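The predict/certify pair named above appears to follow the randomized-smoothing template popularized by Cohen et al. (2019), with the stochasticity supplied here by rate encoding of the input. A minimal Python sketch of the Algorithm-1-style predict step is shown below; the function names (`sample_counts`, `binom_p_value`, `predict`) and the abstention convention are illustrative assumptions, not the authors' code, and the certified-radius computation of Algorithm 2 depends on the paper's rate-encoding analysis, so it is omitted here.

```python
import math
from collections import Counter

def sample_counts(classifier, x, n, num_classes):
    """Run the stochastic classifier n times on input x and count labels.
    In the paper's setting the randomness comes from rate encoding x over
    T timesteps; here `classifier` is any x -> label function with
    internal randomness."""
    counts = Counter(classifier(x) for _ in range(n))
    return [counts.get(c, 0) for c in range(num_classes)]

def binom_p_value(k, n, p=0.5):
    """One-sided p-value P(X >= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def predict(classifier, x, n, alpha, num_classes):
    """Algorithm-1-style prediction: return the most frequent class if it
    beats the runner-up at significance level alpha, else abstain (None)."""
    counts = sample_counts(classifier, x, n, num_classes)
    ca, cb = sorted(range(num_classes), key=lambda c: counts[c],
                    reverse=True)[:2]
    if binom_p_value(counts[ca], counts[ca] + counts[cb]) <= alpha:
        return ca
    return None
```

The abstention step guards against declaring a winner from sampling noise: if the top two counts are statistically indistinguishable, the smoothed classifier refuses to answer rather than risk an uncertified prediction.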
Open Source Code | Yes | The code is available at https://github.com/BhaskarMukhoty/CertifiedSNN.
Open Datasets | Yes | We use standard static image datasets such as CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and ImageNet-100 (Deng et al., 2009).
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or test sets.
Hardware Specification | Yes | Table 9 reports the time required per epoch for various adversarial training methods on a single NVIDIA RTX A6000 GPU.
Software Dependencies | No | The paper does not provide ancillary software details with version numbers (e.g., Python 3.8, PyTorch 1.9). It mentions use of the straight-through estimator (STE), but gives no versions for software dependencies.
Experiment Setup | Yes | Table 8 reports the training hyper-parameters used across the four datasets: number of epochs 200; mini-batch size 64; LIF β = 0.5; LIF u_th = 1; learning rate 0.1; optimizer SGD with momentum 0.9 and weight decay 5×10⁻⁴; LR scheduler: cosine annealing; FGSM/PGD/GN: ε = 8/255; PGD (train): η = 2/255, 4 iterations; PGD (test): η = 2.55/255, 7 iterations.
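The Table 8 values above can be collected into a single configuration, and the cosine-annealing schedule made concrete. This is a sketch only: the dict layout and the helper `cosine_annealing_lr` are illustrative assumptions (following the standard PyTorch `CosineAnnealingLR` convention with `eta_min = 0`), not the authors' code.

```python
import math

# Hyper-parameters quoted from Table 8 of the paper (layout is illustrative).
CONFIG = {
    "epochs": 200,
    "batch_size": 64,
    "lif_beta": 0.5,          # LIF leak factor β
    "lif_u_th": 1.0,          # LIF firing threshold u_th
    "lr": 0.1,
    "momentum": 0.9,
    "weight_decay": 5e-4,
    "eps": 8 / 255,           # FGSM/PGD/GN perturbation budget ε
    "pgd_train": {"eta": 2 / 255, "iters": 4},
    "pgd_test": {"eta": 2.55 / 255, "iters": 7},
}

def cosine_annealing_lr(epoch, base_lr=CONFIG["lr"], total=CONFIG["epochs"]):
    """Cosine-annealed learning rate with eta_min = 0 (an assumption):
    lr(t) = base_lr * (1 + cos(pi * t / T)) / 2."""
    return base_lr * (1 + math.cos(math.pi * epoch / total)) / 2
```

Under this convention the learning rate starts at 0.1, passes through 0.05 at the halfway point, and decays smoothly to 0 by epoch 200.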