Single-phase deep learning in cortico-cortical networks

Authors: Will Greedy, Heng Wei Zhu, Joseph Pemberton, Jack Mellor, Rui Ponte Costa

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | First, we demonstrate that our model can effectively backpropagate errors through multiple layers using a single-phase learning process. Next, we show both empirically and analytically that learning in our model approximates backprop-derived gradients. Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). (A hedged sketch of such a gradient-alignment check is given after the table.)
Researcher Affiliation | Academia | Will Greedy, Bristol Computational Neuroscience Unit, Department of Computer Science, SCEEM, University of Bristol, United Kingdom, will.greedy@bristol.ac.uk; Heng Wei Zhu, Bristol Computational Neuroscience Unit, School of Phys., Pharm. and Neuroscience, University of Bristol, United Kingdom, hengwei.zhu@bristol.ac.uk; Joseph Pemberton, Bristol Computational Neuroscience Unit, Department of Computer Science, SCEEM, University of Bristol, United Kingdom, joe.pemberton@bristol.ac.uk; Jack Mellor, School of Phys., Pharm. and Neuroscience, University of Bristol, United Kingdom, jack.mellor@bristol.ac.uk; Rui Ponte Costa, Bristol Computational Neuroscience Unit, Department of Computer Science, SCEEM, University of Bristol, United Kingdom, rui.costa@bristol.ac.uk
Pseudocode | No | The paper describes the model architecture and equations (e.g., Equations 1 and 2) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See SM. A link to our code is provided in the SM.
Open Datasets | Yes | Finally, we demonstrate that our model is capable of learning complex image classification tasks (MNIST and CIFAR-10). [21] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/. [22] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html. (Both datasets are publicly downloadable; a hedged loading and validation-split sketch follows the table.)
Dataset Splits | No | While the checklist indicates that training details, including data splits, are provided in the SM, the main paper text does not explicitly specify the validation split used.
Hardware Specification | Yes | This work made use of the supercomputer Blue Pebble. We would also like to thank Callum Wright and the rest of the High Performance Computing team at the University of Bristol for constant and quick help with Blue Pebble.
Software Dependencies | No | The paper does not list the software dependencies needed for replication, such as the programming language, libraries, or frameworks used for the implementation, nor their version numbers.
Experiment Setup | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See SM.
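
The paper's central empirical claim is that its single-phase learning rule approximates backprop-derived gradients. A standard way to probe such a claim is to measure the cosine similarity between the updates proposed by the learning rule and the true backprop gradients. The following is a minimal sketch of that check, assuming PyTorch; `update_fn` is a hypothetical stand-in for the paper's learning rule, not its released code.

```python
import torch
import torch.nn.functional as F

def gradient_alignment(model, update_fn, x, y):
    """Cosine similarity between true backprop gradients and the weight
    updates proposed by an alternative learning rule (illustrative only)."""
    # Backprop gradients of a cross-entropy loss through the whole network.
    loss = F.cross_entropy(model(x), y)
    bp_grads = torch.autograd.grad(loss, list(model.parameters()))

    # Hypothetical helper: update_fn returns one update tensor per parameter,
    # computed by the single-phase learning rule under evaluation.
    rule_updates = update_fn(model, x, y)

    # One alignment score per parameter tensor; values near 1 indicate the
    # rule's updates point in the same direction as the backprop gradients.
    return torch.stack([
        F.cosine_similarity(g.flatten(), u.flatten(), dim=0)
        for g, u in zip(bp_grads, rule_updates)
    ])
```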
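
MNIST and CIFAR-10 are both openly available at the URLs cited above. A minimal loading sketch, assuming torchvision and an illustrative 90/10 train/validation split (the paper's actual preprocessing and splits are described in its SM):

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Assumption: standard torchvision downloads and a plain ToTensor transform.
transform = transforms.ToTensor()

train_full = datasets.MNIST("data", train=True, download=True, transform=transform)
test_set = datasets.MNIST("data", train=False, download=True, transform=transform)
# CIFAR-10 is loaded analogously via datasets.CIFAR10.

# Illustrative 90/10 train/validation split with a fixed seed for reproducibility.
n_val = len(train_full) // 10
train_set, val_set = random_split(
    train_full,
    [len(train_full) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),
)
```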