PrivateSNN: Privacy-Preserving Spiking Neural Networks
Authors: Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on various datasets including CIFAR10, CIFAR100, and Tiny ImageNet, highlighting the importance of privacy-preserving SNN training. |
| Researcher Affiliation | Academia | Youngeun Kim, Yeshwanth Venkatesha, Priyadarshini Panda, Department of Electrical Engineering Yale University New Haven, CT, USA {youngeun.kim, yeshwanth.venkatesha, priya.panda}@yale.edu |
| Pseudocode | Yes | Algorithm 1: Class representative image generation, Algorithm 2: Private SNN Approach, Algorithm 3: Directly generate class representation from SNNs (Attack scenario 2) |
| Open Source Code | No | The paper states 'Our implementation is based on Pytorch (Paszke et al. 2017)' but does not provide any specific link or statement about making their source code publicly available for the described methodology. |
| Open Datasets | Yes | We evaluate our PrivateSNN on three public datasets (i.e., CIFAR-10 (Krizhevsky and Hinton 2009), CIFAR-100 (Krizhevsky and Hinton 2009), Tiny-ImageNet (Deng et al. 2009)). |
| Dataset Splits | No | The paper mentions using '5000, 10000, 10000 synthetic samples for training SNNs' and evaluates on a 'test set', but does not explicitly describe a validation set or specific train/validation/test splits for the synthetic or original datasets in a reproducible manner. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper states 'Our implementation is based on Pytorch (Paszke et al. 2017)' but does not specify the version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Experimental Setting: For post-conversion training, we use Adam with base learning rate 1e-4. Here, we use 5000, 10000, 10000 synthetic samples for training SNNs on CIFAR10, CIFAR100, and Tiny ImageNet, respectively. We use step-wise learning rate scheduling with a decay factor of 10 at 50% and 70% of the total number of epochs. We set the total number of epochs to 20 for all datasets. For on-device distillation, we set m and τ to 0.7 and 20 in Eq. 3, respectively. For class representation, we set f_blur and η in Algorithm 1 to 4 and 6, respectively. For attack scenario 2 in Algorithm 3, we set ζ to 0.01. All detailed experimental setup and hyperparameters are described in Supplementary B. |
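
The setup row above fixes most of the optimization hyperparameters even though the authors' code is not public. The sketch below is a minimal, hypothetical PyTorch training loop that wires in the reported values (Adam at 1e-4, 20 epochs, 10x step decay at 50% and 70% of training, m = 0.7 and τ = 20 for distillation). The model, data loader, spike/time-step handling, and the exact form of the distillation loss in Eq. 3 are assumptions (a standard Hinton-style KD loss is used as a stand-in), not the paper's implementation.

```python
# Hypothetical sketch of the reported post-conversion training setup.
# `snn_model`, `ann_teacher`, and `synthetic_loader` are placeholders; spike
# encoding over time steps is omitted for brevity.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, m=0.7, tau=20.0):
    """Stand-in for Eq. 3: temperature-scaled KL term mixed with cross-entropy.

    m and tau match the values reported in the paper; the exact loss form is
    an assumption.
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / tau, dim=1),
        F.softmax(teacher_logits / tau, dim=1),
        reduction="batchmean",
    ) * (tau ** 2)
    hard = F.cross_entropy(student_logits, labels)
    return m * soft + (1.0 - m) * hard


def post_conversion_training(snn_model, ann_teacher, synthetic_loader,
                             epochs=20, base_lr=1e-4, device="cuda"):
    snn_model.to(device).train()
    ann_teacher.to(device).eval()
    optimizer = torch.optim.Adam(snn_model.parameters(), lr=base_lr)
    # Step-wise schedule: decay by a factor of 10 at 50% and 70% of total epochs.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[epochs // 2, int(0.7 * epochs)], gamma=0.1
    )
    for _ in range(epochs):
        for images, labels in synthetic_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            student_logits = snn_model(images)
            with torch.no_grad():
                teacher_logits = ann_teacher(images)
            loss = distillation_loss(student_logits, teacher_logits, labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```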