Exploring Temporal Information Dynamics in Spiking Neural Networks

Authors: Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Anna Hambitzer, Priyadarshini Panda

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We observe that the temporal information concentration phenomenon is a common learning feature of SNNs by conducting extensive experiments on various configurations such as architecture, dataset, optimization strategy, time constant, and timesteps.
Researcher Affiliation | Collaboration | (1) Department of Electrical Engineering, Yale University, New Haven, CT, USA; (2) Technology Innovation Institute, Abu Dhabi, UAE. {youngeun.kim, yuhang.li, hyoungseob.park, yeshwanth.venkatesha, priya.panda}@yale.edu, Anna.Hambitzer@tii.ae
Pseudocode | No | The paper describes mathematical formulations and processes but does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present any structured pseudocode.
Open Source Code | Yes | Code is available at https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Temporal-Information-Dynamics-in-Spiking-Neural-Networks.
Open Datasets | Yes | CIFAR10 (Krizhevsky and Hinton 2009), SVHN (Netzer et al. 2011), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and CIFAR100 (Krizhevsky and Hinton 2009).
Dataset Splits | No | The paper uses standard datasets such as CIFAR10 and SVHN but does not explicitly provide train/validation/test split percentages or sample counts, nor does it refer to a specific predefined split strategy.
Hardware Specification | No | The paper does not provide specific details regarding the hardware used for the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions general components such as spatio-temporal back-propagation and the SGD optimizer, but does not provide version numbers for any libraries, frameworks (e.g., a PyTorch version), or other ancillary software dependencies needed for replication.
Experiment Setup | Yes | The default setting for all experiments is as follows: timestep 10, time constant 2, SGD optimizer with learning rate 3e-1, weight decay 5e-4, CIFAR10 dataset, and ResNet19 architecture. We use ϵ = 8/255 for the FGSM attack, and [ϵ = 8/255, α = 4/255, n = 10] for the PGD attack. We select α_cifar10 = [1e-3, 1e-2, 7e-2], α_svhn = [1e-4, 1e-2, 7e-2], α_cifar100 = [1e-4, 1e-3, 1e-2] for [α_low, α_intermediate, α_high].
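
For readers who want to mirror the reported setup, the sketch below collects the stated defaults into a plain configuration and shows how the FGSM/PGD attack parameters are typically applied in a PyTorch workflow. This is a minimal sketch under the assumption of a standard PyTorch training script, not the authors' released code: the names DEFAULTS, make_optimizer, fgsm_attack, and pgd_attack are illustrative, and `model` stands in for any module mapping images to class logits (an SNN would internally unroll its timesteps).

```python
# Minimal sketch of the reported default setup (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

# Reported training defaults.
DEFAULTS = {
    "timesteps": 10,        # simulation timesteps T
    "time_constant": 2.0,   # leak/time constant
    "lr": 3e-1,             # SGD learning rate
    "weight_decay": 5e-4,
    "dataset": "CIFAR10",
    "architecture": "ResNet19",
}

# Reported adversarial-attack parameters.
FGSM_EPS = 8 / 255
PGD_EPS, PGD_ALPHA, PGD_STEPS = 8 / 255, 4 / 255, 10

# Reported alpha values per dataset: [alpha_low, alpha_intermediate, alpha_high].
ALPHAS = {
    "cifar10":  [1e-3, 1e-2, 7e-2],
    "svhn":     [1e-4, 1e-2, 7e-2],
    "cifar100": [1e-4, 1e-3, 1e-2],
}


def make_optimizer(model):
    """SGD with the reported learning rate and weight decay."""
    return torch.optim.SGD(model.parameters(),
                           lr=DEFAULTS["lr"],
                           weight_decay=DEFAULTS["weight_decay"])


def fgsm_attack(model, x, y, eps=FGSM_EPS):
    """Single-step FGSM: perturb the input along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()


def pgd_attack(model, x, y, eps=PGD_EPS, alpha=PGD_ALPHA, steps=PGD_STEPS):
    """Iterative PGD: n gradient-sign steps, projected back into the eps-ball."""
    x = x.clone().detach()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the L-inf ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()     # keep a valid pixel range
    return x_adv
```

The FGSM/PGD helpers follow the standard formulations with the epsilon, step size, and step count listed in the table; they are included only to make those parameters concrete.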