Biological credit assignment through dynamic inversion of feedforward networks

Authors: Bill Podlaski, Christian K. Machens

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We tested dynamic inversion (DI) and non-dynamic inversion (NDI) against backpropagation (BP), feedback alignment (FA), and pseudobackprop (PBP) on four modest supervised and unsupervised learning tasks: linear regression, nonlinear regression, MNIST classification, and MNIST autoencoding. (The feedback matrices that distinguish these algorithms are sketched after the table.)
Researcher Affiliation | Academia | William F. Podlaski, Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal; Christian K. Machens, Champalimaud Research, Champalimaud Centre for the Unknown, 1400-038 Lisbon, Portugal. Correspondence: {william.podlaski, christian.machens}@research.fchampalimaud.org
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks (e.g., a section labeled 'Algorithm' or 'Pseudocode').
Open Source Code | No | The paper does not provide any concrete access information (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | We next tested dynamic inversion on the MNIST handwritten digit dataset, where we use the standard training and test datasets (LeCun et al., 1998)...
Dataset Splits | No | The paper mentions using the 'standard training and test datasets' for MNIST, but does not explicitly describe a validation split or give percentages/counts for all three splits (train/validation/test). (The standard MNIST splits are sketched after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or detailed computer specifications) used to run its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | Learning rate = 10^-2 for all algorithms; DI was simulated numerically using 1000 Euler steps with dt = 0.5; mini-batch training (100 examples per batch); learning rate = 10^-3 for all algorithms; learning rate = 10^-6 for all algorithms (the quoted learning rates come from different experiments). (An Euler-integration sketch using these parameters follows the table.)
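
The algorithms compared above differ chiefly in the feedback matrix used to propagate errors backward. As a point of reference (our own summary, not code from the paper): backpropagation uses the transpose of the forward weights, feedback alignment a fixed random matrix, and pseudobackprop the Moore-Penrose pseudoinverse, which dynamic inversion instead computes on the fly through network dynamics.

```python
import numpy as np

def feedback_matrix(W, method, B=None):
    """Feedback matrix used to propagate errors through one layer.

    Our own summary of the compared algorithms, not code from the paper:
    BP  - transpose of the forward weights (exact gradient signal)
    FA  - a fixed random matrix B (feedback alignment)
    PBP - the Moore-Penrose pseudoinverse of the forward weights
    """
    if method == "BP":
        return W.T
    if method == "FA":
        return B
    if method == "PBP":
        return np.linalg.pinv(W)
    raise ValueError(f"unknown method: {method}")
```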
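
On dataset splits: the "standard training and test datasets" (LeCun et al., 1998) are 60,000 training and 10,000 test images. A minimal loading sketch, assuming torchvision is available (the paper does not say which framework was used):

```python
from torchvision import datasets

# Standard MNIST splits (LeCun et al., 1998): 60,000 train / 10,000 test.
# The paper reports no separate validation split, so none is carved out here.
train = datasets.MNIST("data", train=True, download=True)
test = datasets.MNIST("data", train=False, download=True)
print(len(train), len(test))  # 60000 10000
```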
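
Finally, the quoted Euler parameters (1000 steps, dt = 0.5) admit a minimal sketch of dynamic inversion as a least-squares gradient flow, du/dt = W^T (e - W u), whose fixed point is the pseudoinverse solution u = W^+ e. This is an assumed formulation consistent with the paper's description, not necessarily its exact network dynamics:

```python
import numpy as np

def dynamic_inversion(W, e, n_steps=1000, dt=0.5):
    """Euler-integrate du/dt = W^T (e - W u); the fixed point is u = W^+ e.

    Hedged sketch: the step count and dt match the paper's quoted values,
    but the dynamics are an assumed least-squares gradient flow, not
    necessarily the exact formulation of Podlaski & Machens.
    Stability requires dt * lambda_max(W^T W) < 2.
    """
    u = np.zeros(W.shape[1])
    for _ in range(n_steps):
        u += dt * W.T @ (e - W @ u)  # one Euler step of the relaxation
    return u

rng = np.random.default_rng(0)
W = 0.4 * rng.normal(size=(5, 3))  # scaled so the dynamics stay stable
e = rng.normal(size=5)
u = dynamic_inversion(W, e)
print(np.abs(u - np.linalg.pinv(W) @ e).max())  # small at convergence
```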