Backpropagation for Energy-Efficient Neuromorphic Computing
Authors: Steve K. Esser, Rathinakumar Appuswamy, Paul Merolla, John V. Arthur, Dharmendra S. Modha
NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate, we trained a sparsely connected network that runs on the TrueNorth chip using the MNIST dataset. With a high-performance network (ensemble of 64), we achieve 99.42% accuracy at 108 µJ per image, and with a high-efficiency network (ensemble of 1) we achieve 92.7% accuracy at 0.268 µJ per image. |
| Researcher Affiliation | Industry | Steve K. Esser, IBM Research Almaden, 650 Harry Road, San Jose, CA 95120, sesser@us.ibm.com |
| Pseudocode | No | The paper describes equations and procedures in text and with diagrams, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | We applied the training method described above to the MNIST dataset [20] |
| Dataset Splits | No | The paper mentions training parameters like 'mini batch size 100' and data transformations, but does not specify the explicit percentages or counts for training, validation, and test splits of the dataset. |
| Hardware Specification | Yes | We use the TrueNorth neurosynaptic chip [7] as our example deployment system, though the approach here could be generalized to other neuromorphic hardware [4][5][6]. The TrueNorth chip consists of 4096 cores, with each core containing 256 axons (inputs), a 256×256 synapse crossbar, and 256 spiking neurons. |
| Software Dependencies | No | The paper does not specify any software libraries, frameworks, or their version numbers used for the experiments. |
| Experiment Setup | Yes | For the results shown below, we used mini-batch size 100, momentum 0.9, dropout 0.5 [18], learning rate decay on a fixed schedule across training iterations starting at 0.1 and multiplying by 0.1 every 250 epochs, and transformations of the training data for each iteration with rotation up to 15°, shift up to 5 pixels, and rescale up to 15%. |
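
The Hardware Specification row quotes only per-core figures. As a quick consistency check, the sketch below multiplies out those quoted numbers (4096 cores, 256 axons and 256 neurons per core, a 256×256 crossbar) to get chip-level totals; the derived totals are arithmetic from the quoted figures, not additional claims from the paper.

```python
# Chip-level totals implied by the TrueNorth figures quoted in the
# Hardware Specification row: 4096 cores, each with 256 axons,
# a 256x256 synapse crossbar, and 256 spiking neurons.
CORES = 4096
AXONS_PER_CORE = 256
NEURONS_PER_CORE = 256
SYNAPSES_PER_CORE = AXONS_PER_CORE * NEURONS_PER_CORE  # 256x256 crossbar

total_neurons = CORES * NEURONS_PER_CORE    # 1,048,576 neurons
total_synapses = CORES * SYNAPSES_PER_CORE  # 268,435,456 synapses

print(f"neurons:  {total_neurons:,}")
print(f"synapses: {total_synapses:,}")
```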
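
The Experiment Setup row lists the reported hyperparameters but, as noted in the Open Source Code row, no training code is released. The sketch below shows one way those hyperparameters could be wired into a standard PyTorch/torchvision training loop; the framework choice, the stand-in network, the plain SGD optimizer, and the epoch count are assumptions for illustration, and this is not the paper's TrueNorth-constrained training procedure.

```python
# Minimal sketch wiring the reported hyperparameters into a generic
# PyTorch training loop. The network and epoch count are hypothetical;
# the paper trains a sparsely connected, TrueNorth-constrained network
# that is not reproduced here.
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms

# Data augmentation roughly matching the reported transformations:
# rotation up to 15 degrees, shift up to 5 pixels (5/28 of the image),
# rescale up to 15%.
train_transform = transforms.Compose([
    transforms.RandomAffine(degrees=15,
                            translate=(5 / 28, 5 / 28),
                            scale=(0.85, 1.15)),
    transforms.ToTensor(),
])

train_set = datasets.MNIST("data", train=True, download=True,
                           transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=100,
                                           shuffle=True)

# Hypothetical stand-in network (not the paper's architecture).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # dropout 0.5 as reported
    nn.Linear(256, 10),
)

optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Reported schedule: start at 0.1, multiply by 0.1 every 250 epochs.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=250, gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(750):  # total epoch count is an assumption
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```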