IM-Loss: Information Maximization Loss for Spiking Neural Networks
Authors: Yufei Guo, Yuanpei Chen, Liwen Zhang, Xiaode Liu, Yinglei Wang, Xuhui Huang, Zhe Ma
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both popular non-spiking static and neuromorphic datasets show that the SNN models trained by our method outperform the current state-of-the-art algorithms. |
| Researcher Affiliation | Collaboration | Yufei Guo, Yuanpei Chen, Liwen Zhang, Xiaode Liu, Yinglei Wang, Xuhui Huang, Zhe Ma, Intelligent Science & Technology Academy of CASIC; yfguo@pku.edu.cn, rop477@163.com, mazhe_thu@163.com |
| Pseudocode | Yes | The algorithm of the training process of our method is presented in Appendix A.2. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The experiments use widely adopted network structures, including spiking CIFARNet [33], ResNet-19 [38], modified VGG-16 [26], and ResNet-34 [12], on both popular non-spiking static and neuromorphic datasets: CIFAR10/100, ImageNet (ILSVRC12), and CIFAR10-DVS [18]. |
| Dataset Splits | Yes | The experiments use widely adopted network structures, including spiking CIFARNet [33], ResNet-19 [38], modified VGG-16 [26], and ResNet-34 [12], on both popular non-spiking static and neuromorphic datasets: CIFAR10/100, ImageNet (ILSVRC12), and CIFAR10-DVS [18]. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions general machine learning frameworks such as PyTorch and TensorFlow but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We set λ as 2 in this paper... In practice, we set Kmin = 100 and Kmax = 101... Both networks were trained with a timestep of 4 and without normalization... To remain friendly to neuromorphic hardware, the max-pooling layers were replaced with average-pooling layers in the network architectures used. |
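For concreteness, the sketch below assembles the reported setup constants (λ = 2, Kmin = 100, Kmax = 101, a timestep of 4, and the max-pooling to average-pooling swap) into a minimal PyTorch-style configuration. Only these constants come from the paper; the helper names (`swap_max_for_avg_pool`, `im_loss`, `k_schedule`) and the exact form of the information-maximization term are assumptions for illustration, not the authors' implementation (which is given in their Appendix A.2).

```python
import torch
import torch.nn as nn

# Constants reported in the paper's experiment setup.
LAMBDA = 2.0          # weight of the IM-Loss term
K_MIN, K_MAX = 100, 101
TIMESTEPS = 4         # simulation timesteps per sample


def swap_max_for_avg_pool(model: nn.Module) -> nn.Module:
    """Replace every MaxPool2d with an AvgPool2d of the same geometry,
    mirroring the paper's neuromorphic-hardware-friendly substitution."""
    for name, child in model.named_children():
        if isinstance(child, nn.MaxPool2d):
            setattr(model, name, nn.AvgPool2d(
                kernel_size=child.kernel_size,
                stride=child.stride,
                padding=child.padding))
        else:
            swap_max_for_avg_pool(child)
    return model


def im_loss(spikes: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for the information-maximization term: push the
    mean firing rate toward 0.5, which maximizes the entropy of a Bernoulli
    spike distribution. The authors' exact formulation may differ."""
    p = spikes.mean()
    return (p - 0.5) ** 2


def k_schedule(epoch: int, total_epochs: int) -> float:
    """Assumed linear schedule from K_MIN to K_MAX over training; the paper
    defines the actual evolution rule for this parameter."""
    return K_MIN + (K_MAX - K_MIN) * epoch / max(total_epochs - 1, 1)


def total_loss(logits: torch.Tensor, targets: torch.Tensor,
               spikes: torch.Tensor) -> torch.Tensor:
    """Cross-entropy plus the IM term weighted by lambda, as a sketch."""
    return nn.functional.cross_entropy(logits, targets) + LAMBDA * im_loss(spikes)
```

As a usage note, `swap_max_for_avg_pool` can be applied to any existing CNN backbone (e.g., a VGG-16) before converting it to a spiking model; the spiking neuron layers themselves are omitted here since the report does not specify their implementation.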