Biologically-plausible backpropagation through arbitrary timespans via local neuromodulators

Authors: Yuhan Helena Liu, Stephen Smith, Stefan Mihalas, Eric Shea-Brown, Uygar Sümbül

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our mathematical analysis of ModProp learning, together with simulation results on benchmark temporal tasks, demonstrates the advantage of ModProp over existing biologically-plausible temporal credit assignment rules. To test the ModProp formulation, we study its performance in well-known tasks involving temporal processing: pattern generation, delayed XOR, and sequential MNIST.
Researcher Affiliation | Academia | (1) Department of Applied Mathematics, University of Washington, Seattle, WA, USA; (2) Allen Institute for Brain Science, 615 Westlake Ave N, Seattle, WA, USA; (3) Computational Neuroscience Center, University of Washington, Seattle, WA, USA; (4) Department of Molecular and Cellular Physiology, Stanford University, Stanford, CA, USA
Pseudocode | No | The paper presents mathematical derivations and model descriptions but does not include any pseudocode or explicitly labeled algorithm blocks.
Open Source Code | Yes | Anonymized code for this paper is available at: https://anonymous.4open.science/r/bio_mod_prop-FC52
Open Datasets | Yes | Finally, we study the pixel-by-pixel MNIST [58] task, which is a popular machine learning benchmark. The sequential MNIST experiment was performed on the MNIST dataset [58]. (See the data-loading sketch after the table.)
Dataset Splits | No | The paper mentions using the MNIST dataset but does not explicitly specify the training, validation, and test splits (e.g., percentages or sample counts) needed to reproduce the results.
Hardware Specification | Yes | All simulations were performed in Python 3.8.10 with PyTorch 1.10.0 [67] on a desktop computer with a 3.70 GHz AMD Ryzen Threadripper 3970X 32-core processor, 256 GB of RAM, and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | Yes | All simulations were performed in Python 3.8.10 with PyTorch 1.10.0 [67]... (See the environment-check sketch after the table.)
Experiment Setup | Yes | We train our RNNs using the Adam optimizer [65] with learning rate 0.001 and weight decay 1e-5 for 500 epochs for all tasks. Pattern generation: 60 units (48 excitatory, 12 inhibitory); initial recurrent weights were sampled from a Gaussian distribution N(0, 0.01/sqrt(N_in)). (See the training-setup sketch after the table.)
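
For the sequential (pixel-by-pixel) MNIST benchmark noted in the Open Datasets row, the sketch below shows one common way to build 784-step input sequences with torchvision. This is an illustrative assumption, not the authors' pipeline: the data root, batch size, and absence of normalization are placeholders, and the paper's exact preprocessing is not quoted here.

```python
# Minimal sketch of a pixel-by-pixel (sequential) MNIST loader.
# Assumes torchvision; preprocessing details (normalization, batching,
# split sizes) are placeholders, not taken from the paper.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_sequence = transforms.Compose([
    transforms.ToTensor(),                       # (1, 28, 28) in [0, 1]
    transforms.Lambda(lambda x: x.view(-1, 1)),  # 784 time steps, 1 input per step
])

train_set = datasets.MNIST("./data", train=True, download=True, transform=to_sequence)
test_set = datasets.MNIST("./data", train=False, download=True, transform=to_sequence)

train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
# Each batch: inputs of shape (batch, 784, 1), labels of shape (batch,)
```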
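The software and hardware rows report Python 3.8.10, PyTorch 1.10.0, and an RTX 3090. A short check like the one below can confirm a local environment against those reported versions; it is a reproducibility aid written for this summary, not code from the paper's repository.

```python
# Quick environment check against the reported setup
# (Python 3.8.10, PyTorch 1.10.0, NVIDIA GeForce RTX 3090).
import sys
import torch

print("Python:", sys.version.split()[0])          # expected 3.8.10
print("PyTorch:", torch.__version__)              # expected 1.10.0
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))  # expected GeForce RTX 3090
else:
    print("No CUDA device visible")
```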
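The training-setup sketch below applies the quoted hyperparameters (Adam, learning rate 0.001, weight decay 1e-5, 500 epochs; 60 units with a 48 excitatory / 12 inhibitory split; recurrent weights from N(0, 0.01/sqrt(N_in))) to a placeholder network. It is not the ModProp learning rule: the nn.RNN module, the input dimension, the interpretation of N_in as the recurrent fan-in, and reading 0.01/sqrt(N_in) as a standard deviation are all assumptions made for illustration.

```python
# Sketch of the reported optimization settings on a placeholder RNN.
# NOT the ModProp rule; only the quoted hyperparameters are reproduced.
import math
import torch
import torch.nn as nn

n_units = 60      # 48 excitatory + 12 inhibitory units (pattern generation)
n_in = 1          # input dimension; assumed, not specified in the quote
fan_in = n_units  # N_in in the quoted expression, taken as recurrent fan-in (assumption)

rnn = nn.RNN(input_size=n_in, hidden_size=n_units, batch_first=True)
readout = nn.Linear(n_units, 1)

# Recurrent weight init: zero-mean Gaussian; 0.01/sqrt(N_in) is interpreted
# here as the standard deviation (the paper's convention may differ).
with torch.no_grad():
    rnn.weight_hh_l0.normal_(mean=0.0, std=0.01 / math.sqrt(fan_in))

optimizer = torch.optim.Adam(
    list(rnn.parameters()) + list(readout.parameters()),
    lr=1e-3, weight_decay=1e-5,
)

for epoch in range(500):   # 500 epochs for all tasks, per the quoted setup
    ...                    # task-specific batches, loss, and optimizer.step() go here
```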