Credit Assignment Through Broadcasting a Global Error Vector

Authors: David G. Clark, L. F. Abbott, SueYeon Chung

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results show that this form of global-error learning is surprisingly powerful, performing on par with BP in VNNs and overcoming DFA's inability to train convolutional layers. ... Here, we show that GEVB performs well in practice. (See the GEVB sketch after the table.)
Researcher Affiliation | Academia | David G. Clark, L.F. Abbott, SueYeon Chung, Center for Theoretical Neuroscience, Columbia University, New York, NY. {david.clark, lfabbott, sueyeon.chung}@columbia.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code accompanying our paper is available at https://github.com/davidclark1/VectorizedNets.
Open Datasets | Yes | We trained models on MNIST [37] and CIFAR-10 [38]. (Loading sketch after the table.)
Dataset Splits | No | The paper does not explicitly provide training/validation/test split details, such as percentages or sample counts for a validation set.
Hardware Specification | Yes | Training lasted 10 days using five NVIDIA GTX 1080 Ti GPUs.
Software Dependencies | No | The paper does not list software dependencies with version numbers, such as Python or PyTorch versions; it mentions only the Adam optimizer, with no version information.
Experiment Setup | Yes | We used Adam for a fixed number of epochs (namely, 190), stopping early at zero training error. For each experiment, we performed five random initializations. Mixed-sign networks were initialized using He initialization, and nonnegative networks were initialized using ON/OFF initialization with an underlying He initialization [36]. (Training-protocol sketch after the table.)