Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Credit Assignment Through Broadcasting a Global Error Vector

Authors: David Clark, L.F. Abbott, SueYeon Chung

NeurIPS 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results show that this form of global-error learning is surprisingly powerful, performing on par with BP in VNNs and overcoming DFA's inability to train convolutional layers. ... Here, we show that GEVB performs well in practice."
Researcher Affiliation | Academia | "David G. Clark, L.F. Abbott, Sue Yeon Chung, Center for Theoretical Neuroscience, Columbia University, New York, NY, EMAIL"
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Code accompanying our paper is available at https://github.com/davidclark1/VectorizedNets."
Open Datasets | Yes | "We trained models on MNIST [37] and CIFAR-10 [38]"
Dataset Splits | No | The paper does not explicitly state training/validation/test splits, such as percentages or sample counts for a validation set.
Hardware Specification | Yes | "Training lasted 10 days using five NVIDIA GTX 1080 Ti GPUs."
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python or PyTorch versions); Adam is mentioned only as an optimizer, not as a versioned dependency.
Experiment Setup | Yes | "We used Adam for a fixed number of epochs (namely, 190), stopping early at zero training error. For each experiment, we performed five random initializations. Mixed-sign networks were initialized using He initialization, and nonnegative networks were initialized using ON/OFF initialization with an underlying He initialization [36]."
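The quoted experiment setup can be sketched as a small protocol skeleton. This is not the authors' code: the layer sizes (784 × 128), the NumPy backend, and the empty epoch body are placeholders; only the five-seed loop, the 190-epoch budget, the early stop at zero training error, and the He initialization (zero-mean Gaussian with std sqrt(2 / fan_in)) come from the paper's description.

```python
import numpy as np

def he_init(fan_in, fan_out, rng):
    """He initialization: zero-mean Gaussian with std sqrt(2 / fan_in)."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

def run_experiment(seed, max_epochs=190):
    # Skeleton of the reported protocol: a fixed budget of 190 epochs,
    # stopping early once training error reaches zero. The model,
    # optimizer (Adam), and data loading are deliberately omitted.
    rng = np.random.default_rng(seed)
    W = he_init(784, 128, rng)  # hypothetical MNIST-sized hidden layer
    for epoch in range(max_epochs):
        train_error = 1.0  # would be computed from one full Adam epoch
        if train_error == 0.0:
            break
    return W

# Five random initializations per experiment, as in the paper.
results = [run_experiment(seed) for seed in range(5)]
```

The ON/OFF initialization used for nonnegative networks is not reproduced here, since the paper's quote gives only its name and its underlying He initialization, not its construction.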