GAIT-prop: A biologically plausible learning rule derived from backpropagation of error

Authors: Nasir Ahmad, Marcel A. J. van Gerven, Luca Ambrogioni

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In a series of simple computer vision experiments, we show near-identical performance between backpropagation and GAIT-prop with a soft orthogonality-inducing regularizer. (See the regularizer sketch below this table.)
Researcher Affiliation | Academia | Nasir Ahmad, Marcel van Gerven, Luca Ambrogioni; Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands; {n.ahmad,m.vangerven,l.ambrogioni}@donders.ru.nl
Pseudocode | Yes | Algorithm 1 GAIT-prop (per training sample update). (See the illustrative sketch below this table.)
Open Source Code | Yes | Code used to produce the results shown in this paper is available at https://github.com/nasiryahm/GAIT-prop.
Open Datasets | Yes | We make use of three image classification datasets: MNIST, Fashion-MNIST, and KMNIST.
Dataset Splits | No | The paper mentions training and testing, and a grid search for parameters, but does not explicitly detail the training/validation/test splits, such as percentages or sample counts for each split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions use of the Adam optimiser but does not provide version numbers for any software dependencies or libraries.
Experiment Setup | No | In order to identify acceptable parameters for each of our learning methods, we ran a grid search for the learning rate η and the orthogonal regularizer strength λ. The highest-performing networks were tested for stability, and stable, high-performing parameters were used. Details of the specific parameters used and the grid-search outcomes are provided in the Supplementary Material. (See the grid-search sketch below this table.)
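
The "soft orthogonality-inducing regularizer" quoted in the Research Type row matters because GAIT-prop's use of transposed weights as approximate inverses is only valid when the weight matrices stay near-orthogonal. The paper's exact formulation is not reproduced on this page; the following is a minimal NumPy sketch assuming the standard soft-orthogonality penalty λ‖WᵀW − I‖²_F, which may differ from the authors' form.

```python
import numpy as np

def soft_orthogonality_penalty(W, lam):
    """Penalty lam * ||W^T W - I||_F^2, pushing W toward orthogonality.

    Assumption: this is the standard soft-orthogonality form; the
    paper's exact regularizer may differ in scaling or normalisation.
    """
    residual = W.T @ W - np.eye(W.shape[1])
    return lam * np.sum(residual ** 2)

def soft_orthogonality_grad(W, lam):
    """Gradient of the penalty w.r.t. W: 4 * lam * W (W^T W - I)."""
    return 4.0 * lam * W @ (W.T @ W - np.eye(W.shape[1]))
```

In training, this gradient would simply be subtracted from each layer's weight matrix alongside the learning-rule update, scaled by the learning rate.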
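The Pseudocode row points to Algorithm 1, whose individual steps are not reproduced here. As a rough, paraphrased illustration of a per-training-sample target-propagation update in the spirit of GAIT-prop (an output target propagated backward through transposed weights, with purely layer-local delta-rule updates), here is a hedged NumPy sketch; the authors' actual Algorithm 1 is given in the paper and the linked repository.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gaitprop_style_update(weights, x0, y_label, eta=0.01, gamma=0.1):
    """One per-sample update in the spirit of GAIT-prop.

    An output target is formed by nudging the network output toward the
    label; targets are then propagated backward through the transposed
    weights (a stand-in for the inverse, valid only near orthogonality),
    and each layer makes a purely local delta-rule update toward its
    target. This is an illustrative paraphrase, not the authors'
    Algorithm 1.
    """
    # Forward pass, storing every layer's activation.
    xs = [x0]
    for W in weights:
        xs.append(sigmoid(W @ xs[-1]))

    # Output target: a small step from the output toward the label.
    t = xs[-1] + gamma * (y_label - xs[-1])

    # Backward target propagation with local weight updates.
    for l in reversed(range(len(weights))):
        W = weights[l]
        # Local error, gated by the sigmoid derivative x * (1 - x).
        delta = (t - xs[l + 1]) * xs[l + 1] * (1.0 - xs[l + 1])
        weights[l] = W + eta * np.outer(delta, xs[l])
        if l > 0:
            # W.T approximates W^{-1} when the soft-orthogonality
            # regularizer keeps W close to orthogonal.
            t = xs[l] + W.T @ delta
    return weights
```

In a full training loop, the soft-orthogonality gradient from the previous sketch would be applied to each weights[l] inside the same update.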
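Finally, the Experiment Setup row describes a grid search over the learning rate η and the regularizer strength λ. A minimal sketch of such a search follows; the value grids and the train_and_evaluate helper are hypothetical stand-ins, since the actual parameters appear only in the paper's Supplementary Material.

```python
import itertools

def train_and_evaluate(eta, lam):
    """Hypothetical stand-in: train a network with learning rate eta and
    regularizer strength lam, then return its test accuracy."""
    return 0.0  # placeholder; a real run would train and test a network

# Hypothetical grids; the values actually searched appear only in the
# paper's Supplementary Material.
etas = [1e-4, 1e-3, 1e-2]
lams = [1e-3, 1e-2, 1e-1]

best = max((train_and_evaluate(eta, lam), eta, lam)
           for eta, lam in itertools.product(etas, lams))
print(f"best accuracy {best[0]:.3f} at eta={best[1]}, lambda={best[2]}")
```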