Temporally Efficient Deep Learning with Spikes

Authors: Peter O'Connor, Efstratios Gavves, Matthias Reisser, Max Welling

ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that on MNIST, on a temporal variant of MNIST, and on Youtube-BB, a dataset with videos in the wild, our algorithm performs about as well as a standard deep network trained with backpropagation, despite only communicating discrete values between layers.
Researcher Affiliation | Academia | Peter O'Connor, Efstratios Gavves, Matthias Reisser, Max Welling, QUVA Lab, University of Amsterdam, Amsterdam, Netherlands ({p.e.oconnor,egavves,m.reisser,m.welling}@uva.nl)
Pseudocode | No | The paper provides mathematical formulas and descriptions of its algorithms (e.g., Equations 19 and 20 in Appendix D), but does not present them in a structured pseudocode block explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Code is available at github.com/petered/pdnn.
Open Datasets | Yes | To evaluate our network's ability to learn, we train it on the standard MNIST dataset, as well as a variant we created called Temporal MNIST. [...] from the large, recently released Youtube-BB dataset (Real et al., 2017).
Dataset Splits | No | The paper explicitly mentions training and test sets (e.g., in Figure 5 and Appendix F), but does not describe a validation set or a hyperparameter-tuning methodology separate from the test set.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models, memory) used to run the experiments; it only cites the estimated energy costs per operation of Horowitz (2014) for comparison, without detailing its own setup.
Software Dependencies | No | The paper does not list the software dependencies (programming languages, libraries, or frameworks) and their versions needed to reproduce the experimental environment.
Experiment Setup | Yes | In our experiments, we choose η_k = 0.001, k_β^rel = 0.91, k_α = 0.91, and initialize µ_0 = 1. [...] For all experiments, the PDNN started with k_α = 0.5, and this was increased to k_α = 0.9 after 1 epoch.
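
As a reading aid, the sketch below collects the hyperparameters quoted in the Experiment Setup row into a single configuration helper. It is an illustrative assumption, not code from the authors' github.com/petered/pdnn repository; the helper name pdnn_config and the key names (eta_k, k_beta_rel, mu_0, k_alpha) are hypothetical.

```python
# Minimal sketch (not the authors' code): the experiment settings quoted above,
# expressed as a hypothetical configuration. Only the numeric values come from
# the paper's quoted text; every name here is an assumption for illustration.

def pdnn_config(epoch: int) -> dict:
    """Return the assumed PDNN settings for a given epoch.

    The quoted schedule starts k_alpha at 0.5 and raises it to 0.9 after
    1 epoch; the experiments section also quotes a fixed k_alpha = 0.91,
    which may refer to a different (relative) setting.
    """
    return {
        "eta_k": 0.001,       # quoted learning-rate value
        "k_beta_rel": 0.91,   # quoted as k_beta^rel = 0.91
        "mu_0": 1.0,          # quoted initialization of mu_0
        "k_alpha": 0.5 if epoch < 1 else 0.9,  # quoted schedule
    }

if __name__ == "__main__":
    print(pdnn_config(epoch=0))  # k_alpha = 0.5 during the first epoch
    print(pdnn_config(epoch=2))  # k_alpha = 0.9 afterwards
```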