Balanced Resonate-and-Fire Neurons

Authors: Saya Higuchi, Sebastian Kairat, Sander Bohte, Sebastian Otte

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that networks of BRF neurons achieve overall higher task performance, produce only a fraction of the spikes, and require significantly fewer parameters as compared to modern RSNNs. We implemented the RF, BRF, and BHRF neurons within RSNNs and applied them to simulations with several benchmark datasets."
Researcher Affiliation | Academia | "1 Adaptive AI Lab, Institute of Robotics and Cognitive Systems, University of Lübeck, Germany; 2 Machine Learning Group, Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands."
Pseudocode | Yes | "Algorithm 1: BRF Forward Pass"
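The paper's Algorithm 1 specifies the full BRF forward pass. As a rough orientation only, the sketch below shows one Euler step of the underlying plain resonate-and-fire (RF) dynamics (Izhikevich, 2001): a damped complex oscillator driven by input, spiking when the real part crosses a threshold. It deliberately omits the "balanced" ingredients of the paper (the divergence boundary on the damping factor, the refractory adaptive threshold, and the smooth reset); the function and parameter names are illustrative, not the paper's.

```python
def rf_step(z, x, omega, b, dt=0.01, theta=1.0):
    """One Euler step of a plain resonate-and-fire neuron (sketch).

    z     : complex membrane state (real part plays the role of the
            membrane potential)
    x     : input current at this step
    omega : intrinsic angular frequency of the oscillator
    b     : damping factor (b < 0 yields a decaying oscillation)
    A spike is emitted when the real part of z exceeds theta.
    """
    z = z + dt * ((b + 1j * omega) * z + x)
    spike = 1.0 if z.real > theta else 0.0
    return z, spike


# With no input and negative damping, the state spirals inward
# toward the origin, so no spike is produced:
z, s = rf_step(1 + 0j, 0.0, omega=10.0, b=-1.0)

# A sufficiently strong input drives the real part over threshold:
z2, s2 = rf_step(0j, 200.0, omega=10.0, b=-1.0)
```

In the BRF formulation, keeping the damping below a frequency-dependent divergence boundary is what guarantees the oscillation stays stable during training; this sketch leaves that constraint to the caller.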
Open Source Code | Yes | "Source code available at https://github.com/AdaptiveAILab/brf-neurons"
Open Datasets | Yes | "The MNIST dataset consists of grayscale 28×28 pixel hand-written digit images for classification. The sequential MNIST (S-MNIST), which converts each image to a sequence of 1×784, is a prominent benchmark dataset that enables comparison between sequential models, with 54,000 images for training, 6,000 for validation, and 10,000 for testing. The Spiking Heidelberg Digits dataset (SHD) is a benchmark audio-to-spike dataset specifically generated for SNNs (Cramer et al., 2020)."
Dataset Splits | Yes | "The sequential MNIST (S-MNIST)... with 54,000 images for training, 6,000 for validation, and 10,000 for testing. We used 7,341 sequences for training, 815 for validation, and 2,264 for inference [SHD]."
Hardware Specification | Yes | "For simulating the models and performing the experiments, we used multiple systems with different deep learning accelerators, including the NVIDIA GeForce RTX 2060, NVIDIA GeForce RTX 2080 Ti, NVIDIA GeForce RTX 3090, and NVIDIA A100."
Software Dependencies | Yes | "PyTorch 2.0.1 on Python 3.10.4 and CUDA 11.7."
Experiment Setup | Yes | "The BRF- and BHRF-RSNN hyperparameters for each dataset are shown in Table 3 in Section A.12. Table 3: Hyperparameters applied for the best-performing BRF, BHRF, and ALIF models, with label-last loss (S-MNIST), average sequence loss (PS-MNIST, ECG, SHD), and a truncation step of 50 for TBPTT (in pruning)."