Neural networks grown and self-organized by noise

Authors: Guruprasad Raghavan, Matt Thomson

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also demonstrate that networks grown from a single unit perform as well as hand-crafted networks on MNIST." "We demonstrate functionality of networks grown and self-organized from a single unit (figure-7c) by evaluating their train and test accuracy on a classification task. Here, we train networks to classify images of handwritten digits obtained from the MNIST dataset (figure-7e)."
Researcher Affiliation | Academia | Guruprasad Raghavan, Department of Bioengineering, Caltech, Pasadena, CA 91125, graghava@caltech.edu; Matt Thomson, Biology and Biological Engineering, Caltech, Pasadena, CA 91125, mthomson@caltech.edu
Pseudocode | No | The paper contains "Box-1" and "Box-2", which give the equations for the dynamical system and the learning rule, respectively. However, these are mathematical models and formulas, not structured pseudocode or algorithm blocks. A flow chart is mentioned in the supplemental materials but is not provided in the main text. (A minimal sketch of these two components appears after this table.)
Open Source Code | No | The paper does not provide any concrete access to source code, such as a specific repository link, an explicit code-release statement, or a mention of code in supplementary materials.
Open Datasets | Yes | "Here, we train networks to classify images of handwritten digits obtained from the MNIST dataset (figure-7e)."
Dataset Splits | No | The paper specifies "10000 training samples and 1000 testing samples" but does not mention a separate validation set or give details of how the data was split beyond these counts. (The readout sketch after this table uses these counts.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, that would be needed to replicate the experiment.
Experiment Setup | No | The paper describes parameters for the Izhikevich neuron model (e.g., a_i, b_i, c_i, d_i, σ², and connection weights S_{i,j} with their radii) and mentions a learning rate (η_learn) for the Hebbian rule. However, it does not provide specific hyperparameter values such as batch size, number of epochs, or optimizer settings for the linear classifier used in the MNIST classification task, nor does it specify the value of η_learn.
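
As context for the "Pseudocode" and "Experiment Setup" rows, the following is a minimal sketch, not the authors' code, of the two ingredients those rows describe: a noise-driven Izhikevich-style dynamical system (Box-1) and a Hebbian-style weight update with learning rate η_learn (Box-2). The parameter values below (a, b, c, d, σ, η_learn) are standard regular-spiking defaults or illustrative guesses, and the exact form of the Hebbian rule is assumed, since the paper's values and equations are not quoted in this report.

```python
# Sketch under assumed parameters; not the paper's reported setup.
import numpy as np

rng = np.random.default_rng(0)

n = 50                              # number of units (illustrative)
a, b, c, d = 0.02, 0.2, -65.0, 8.0  # standard regular-spiking Izhikevich parameters
sigma = 5.0                         # std of the Gaussian noise drive per step (assumed)
eta_learn = 1e-3                    # Hebbian learning rate; value not given in the paper

S = 0.1 * rng.random((n, n))        # connection weights S_ij (illustrative initialization)
np.fill_diagonal(S, 0.0)

v = np.full(n, c)                   # membrane potentials
u = b * v                           # recovery variables
dt = 1.0                            # Euler step, ms

for t in range(1000):
    fired = v >= 30.0               # spike threshold and reset
    v[fired] = c
    u[fired] += d

    # Box-1 style dynamics: Izhikevich equations driven by noise plus recurrent spikes.
    I = sigma * rng.standard_normal(n) + S @ fired.astype(float)
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

    # Box-2 style Hebbian update (assumed form): strengthen weights between co-active units.
    S += eta_learn * np.outer(fired.astype(float), fired.astype(float))
    np.fill_diagonal(S, 0.0)

print("mean weight after noise-driven activity:", S.mean())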
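```

The "Dataset Splits" and "Experiment Setup" rows note that the classification experiment uses 10,000 training and 1,000 testing MNIST samples with a linear classifier whose settings are not reported. The sketch below reproduces only that evaluation protocol, using a logistic-regression readout on raw pixels as a stand-in for the grown network's responses; the readout type and its hyperparameters are assumptions, not values from the paper.

```python
# Sketch of the evaluation protocol only; the classifier settings are guesses.
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# fetch_openml downloads the 70,000-image MNIST set on first use.
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X = X / 255.0

# 10,000 training and 1,000 testing samples, matching the counts quoted above;
# the paper does not say whether a validation set was held out, so none is made here.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=10_000, random_state=0, stratify=y)
X_test, _, y_test, _ = train_test_split(
    X_rest, y_rest, train_size=1_000, random_state=0, stratify=y_rest)

# Linear readout standing in for the classifier trained on the grown network's outputs.
clf = LogisticRegression(max_iter=200)
clf.fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy:", clf.score(X_test, y_test))
```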