Biologically Inspired Dynamic Thresholds for Spiking Neural Networks

Authors: Jianchuan Ding, Bo Dong, Felix Heide, Yufei Ding, Yunduo Zhou, Baocai Yin, Xin Yang

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We validate the effectiveness of the proposed BDETT on robot obstacle avoidance and continuous control tasks under both normal conditions and various degraded conditions, including noisy observations, weights, and dynamic environments. We find that the BDETT outperforms existing static and heuristic threshold approaches by significant margins in all tested conditions." |
| Researcher Affiliation | Academia | Jianchuan Ding¹, Bo Dong², Felix Heide², Yufei Ding³, Yunduo Zhou¹, Baocai Yin¹, Xin Yang¹; ¹Dalian University of Technology, Department of Computer Science; ²Princeton University, Department of Computer Science; ³University of California, Santa Barbara, Department of Computer Science |
| Pseudocode | No | The paper describes the proposed BDETT scheme using mathematical equations (Eqs. 1-6) and provides a conceptual illustration (Figure 1), but it does not include a pseudocode block or a clearly labeled algorithm. |
| Open Source Code | No | "We will release the code upon acceptance." |
| Open Datasets | Yes | "For the Half Cheetah-v3 and Ant-v3 control outputs (see Figure 3a) from the Open AI gym [41]... Specifically, we train the SCNN model [48] on the MNIST dataset [49] as our baseline model." |
| Dataset Splits | No | The paper does not explicitly provide validation splits or split percentages for any of the datasets used. For image classification, it mentions "training and testing" but not validation. |
| Hardware Specification | No | The paper mentions "mobile-GPUs such as the Nvidia Jetson TX2" and the "Loihi neuromorphic processor" when discussing energy-efficiency comparisons with existing work, but it does not specify the hardware used to run its own experiments. |
| Software Dependencies | No | The paper mentions using "Open AI gym [41]", the "twin-delayed deep deterministic policy gradient off-policy algorithm [42]", and the "SCNN model [48]", but it does not specify version numbers for any of these software components or libraries. |
| Experiment Setup | Yes | "We set the batch size to 256 and the learning rate to 0.00001 for both the actor and critic networks during the training process. In addition, we use the following hyperparameter settings for the proposed BDETT: η = 0.01 and ψ = 4.0 for the DET and C = 3.0 for the DTT... Following the training protocol of Pop SAN [35], we set the batch size to 100 and the learning rate to 0.0001 for both the actor and critic networks. The reward discount factor is set to 0.99, and the maximum length of the replay buffer is set to 1 million." |
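For reference, the two training setups quoted in the Experiment Setup row can be collected into a configuration sketch. Only the numeric values come from the paper's quoted text; the dictionary names, key names, and the grouping into two setups are illustrative assumptions (the paper attributes the second setup to PopSAN's training protocol, but does not name the setups this way).

```python
# Hyperparameter values quoted in the review above.
# Names, keys, and groupings are illustrative, not from the paper.
FIRST_SETUP = {
    "batch_size": 256,
    "learning_rate": 1e-5,  # shared by actor and critic networks
    # BDETT settings: eta and psi for the DET, C for the DTT
    "bdett": {"eta": 0.01, "psi": 4.0, "C": 3.0},
}

POPSAN_PROTOCOL_SETUP = {  # "Following the training protocol of Pop SAN [35]"
    "batch_size": 100,
    "learning_rate": 1e-4,  # shared by actor and critic networks
    "reward_discount": 0.99,
    "replay_buffer_max_length": 1_000_000,
}
```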