A Differentiable Point Process with Its Application to Spiking Neural Networks
Authors: Hiroshi Kajino
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We investigate the effectiveness of our gradient estimator through numerical simulation. |
| Researcher Affiliation | Industry | IBM Research Tokyo, Tokyo, Japan. Correspondence to: Hiroshi Kajino <kajino@jp.ibm.com>. |
| Pseudocode | Yes | Algorithm 1: Thinning algorithm for MPP; Algorithm 2: Thinning algorithm for PP; Algorithm 3: Generic learning algorithm. (A hedged sketch of the thinning procedure follows the table.) |
| Open Source Code | Yes | All the experiments are conducted on IBM Cloud, and the code is publicly available (Kajino, 2021). ... Kajino, H. diffsnn, 2021. URL https://github.com/ibm-research-tokyo/diffsnn. |
| Open Datasets | No | Data set. We use a synthetic data set generated by the vanilla SNN (Equation (7)). ... We generate training/test sets consisting of N_train/100 examples of length 50, respectively. |
| Dataset Splits | Yes | We generate training/test sets consisting of N_train/100 examples of length 50, respectively. |
| Hardware Specification | Yes | All the experiments are conducted on IBM Cloud [Footnote 4], and the code is publicly available (Kajino, 2021). [Footnote 4]: Intel Xeon Gold 6248 2.50GHz, 48 cores and 192GB memory. |
| Software Dependencies | No | The paper mentions using "AdaGrad (Duchi et al., 2011)" as an optimizer but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch). (An illustrative AdaGrad update follows the table.) |
| Experiment Setup | Yes | Network size: D = 6, \|O\| = 2, \|H\| = 4; activation/filter functions: a = 5, L = 2, s1 = 0, s2 = 10; PP: τ = 0.3, λ = 20; # of samplings: 100 (Eq. (5)), 1 (Eq. (9)); AdaGrad with initial learning rate 0.05 for 10 epochs. (These settings are gathered in the config sketch below.) |
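
The pseudocode row lists thinning algorithms for sampling from a (marked) point process. Below is a minimal sketch of the standard thinning procedure (Lewis-Shedler/Ogata style), assuming a conditional intensity bounded above by a constant `lam_bar`; the function names and interface are illustrative, not the paper's exact algorithm.

```python
import random

def sample_by_thinning(intensity, lam_bar, t_end, seed=None):
    """Sample event times on [0, t_end] by thinning a homogeneous proposal.

    `intensity(t)` must be bounded above by `lam_bar`; both names are
    illustrative placeholders, not the paper's exact interface.
    """
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        # Propose the next candidate from a homogeneous Poisson(lam_bar).
        t += rng.expovariate(lam_bar)
        if t > t_end:
            break
        # Keep the candidate with probability intensity(t) / lam_bar.
        if rng.random() * lam_bar < intensity(t):
            events.append(t)
    return events

# Example: constant intensity 5.0; accepted events form a Poisson(5.0) process.
print(sample_by_thinning(lambda t: 5.0, lam_bar=20.0, t_end=10.0, seed=0))
```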
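
The software row names AdaGrad (Duchi et al., 2011) without pinned versions. For clarity, here is a self-contained sketch of the generic AdaGrad update (per-coordinate step scaling by accumulated squared gradients); this is the textbook rule, not the paper's implementation.

```python
def adagrad_step(params, grads, accum, lr=0.05, eps=1e-10):
    """One AdaGrad step: scale each coordinate's step by the root of its
    accumulated squared gradient (lr=0.05 matches the setup row)."""
    for i, g in enumerate(grads):
        accum[i] += g * g
        params[i] -= lr * g / (accum[i] ** 0.5 + eps)
    return params, accum

# Toy usage: minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
params, accum = [0.0], [0.0]
for _ in range(2000):
    params, accum = adagrad_step(params, [2.0 * (params[0] - 3.0)], accum, lr=0.5)
print(round(params[0], 2))  # converges toward 3.0
```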
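
Finally, the experiment-setup row can be read as the following grouping of hyperparameters. This is a hedged reconstruction for readability: the key names are ours, and only the numeric values come from the paper.

```python
# Hypothetical grouping of the reported hyperparameters; key names are ours.
config = {
    "network": {"D": 6, "num_observable": 2, "num_hidden": 4},  # D, |O|, |H|
    "activation_filter": {"a": 5, "L": 2, "s1": 0, "s2": 10},
    "point_process": {"tau": 0.3, "lambda": 20},
    "num_samplings": {"eq_5": 100, "eq_9": 1},
    "optimizer": {"name": "AdaGrad", "initial_lr": 0.05, "epochs": 10},
}
```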