Spike-based causal inference for weight alignment

Authors: Jordan Guerguiev, Konrad Kording, Blake Richards

ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We show empirically that this algorithm rapidly makes the backward weights approximate the forward weights. As the backward weights become correct, this improves learning performance over feedback alignment on tasks such as Fashion-MNIST, SVHN, CIFAR-10 and VOC." |
| Researcher Affiliation | Academia | Jordan Guerguiev (1,2), Konrad P. Kording (3), Blake A. Richards (4,5,6,7,*). 1 Department of Biological Sciences, University of Toronto Scarborough, Toronto, ON, Canada; 2 Department of Cell and Systems Biology, University of Toronto, Toronto, ON, Canada; 3 Department of Bioengineering, University of Pennsylvania, PA, United States; 4 Mila, Montreal, QC, Canada; 5 Department of Neurology & Neurosurgery, McGill University, Montreal, QC, Canada; 6 School of Computer Science, McGill University, Montreal, QC, Canada; 7 Canadian Institute for Advanced Research, Toronto, ON, Canada |
| Pseudocode | No | The paper describes the RDD algorithm in equations and text (Section 4.4) but does not include a formally labeled "Pseudocode" or "Algorithm" block. (A hedged sketch of the RDD estimate appears below the table.) |
| Open Source Code | No | The paper does not provide a link or an explicit statement indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We trained the same network architecture (see Appendix A.3) on the Fashion-MNIST, SVHN, CIFAR-10 and VOC datasets using standard autograd techniques (backprop), feedback alignment and our RDD feedback training phase." (A loading sketch appears below the table.) |
| Dataset Splits | No | The paper states "Inputs were randomly cropped and flipped during training, and batch normalization was used at each layer. Networks were trained using a minibatch size of 32." but does not specify validation splits or their proportions. |
| Hardware Specification | No | The paper does not report the hardware (e.g., GPU or CPU models) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers. |
| Experiment Setup | Yes | "Networks were trained using a minibatch size of 32." "During the feedback training phase, the LIF network undergoes a training phase lasting 90 s of simulated time (30 s per set of feedback weights)." ω and γ are hyperparameters; "Y are the feedback weights between layers l + 1 and l, and η and λ_WD are learning rate and weight decay hyperparameters, respectively." (A sketch of such an update appears below the table.) |
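
Since the paper gives the RDD procedure only as equations and text (Section 4.4), the following is a minimal NumPy sketch of the regression-discontinuity estimate it describes: fit linear models just below and just above the spiking threshold, and read the neuron's causal effect off the jump at the threshold. The function name, the window interpretation of the ω hyperparameter, and the plain least-squares fits are assumptions, not the paper's exact formulation.

```python
import numpy as np

def rdd_causal_effect(drive, downstream, theta, omega):
    """Regression-discontinuity estimate of a neuron's causal effect on
    downstream activity. A sketch, not the paper's exact estimator.

    drive      : the neuron's maximal input drive on each trial (1-D array)
    downstream : matching downstream activity measurements (1-D array)
    theta      : spiking threshold
    omega      : half-width of the window around threshold (hyperparameter
                 in the paper; this windowing role is an assumption)
    """
    near = np.abs(drive - theta) < omega      # keep only near-threshold trials
    x, y = drive[near], downstream[near]
    below, above = x < theta, x >= theta      # no-spike vs. spike trials
    # Separate least-squares lines on each side of the threshold
    coef_below = np.polyfit(x[below], y[below], 1)
    coef_above = np.polyfit(x[above], y[above], 1)
    # Discontinuity at threshold = estimated causal effect of spiking
    return np.polyval(coef_above, theta) - np.polyval(coef_below, theta)
```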
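The paper names the datasets, the crop/flip augmentation, and the minibatch size of 32, but neither a framework nor split proportions. A plausible PyTorch loading sketch, assuming torchvision and common CIFAR-style crop settings (the crop size, padding, and framework choice are assumptions):

```python
import torch
from torchvision import datasets, transforms

# Training-time augmentation matching the paper's description:
# random crops and flips. Crop size and padding are assumptions.
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# CIFAR-10 shown as an example; Fashion-MNIST and SVHN load analogously
# (with image sizes adjusted accordingly).
train_set = datasets.CIFAR10(root="data", train=True, download=True,
                             transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32,
                                           shuffle=True)
```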
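The experiment-setup row quotes the symbols Y, η, and λ_WD but not the update rule itself. Below is a generic sketch of a feedback-weight update using those quoted hyperparameters; the "move Y toward the RDD estimate, with weight decay" form is an assumption, not the paper's exact equation from Section 4.4.

```python
import numpy as np

def update_feedback_weights(Y, beta_hat, eta, lam_wd):
    """One update of the feedback weights Y between layers l+1 and l.
    `beta_hat` stands in for the RDD-derived target for Y; this generic
    'move toward the estimate, with weight decay' rule is an assumption.
    """
    return Y + eta * (beta_hat - Y) - eta * lam_wd * Y

# Hypothetical usage: shapes and hyperparameter values are illustrative.
rng = np.random.default_rng(0)
Y = rng.normal(size=(100, 50))           # feedback weights, layer l+1 -> l
beta_hat = rng.normal(size=(100, 50))    # per-connection RDD estimates
Y = update_feedback_weights(Y, beta_hat, eta=0.01, lam_wd=1e-4)
```

Per the quoted setup, such updates would run over the feedback training phase of 90 s of simulated time, at 30 s per set of feedback weights.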