Using noise to probe recurrent neural network structure and prune synapses

Authors: Eli Moore, Rishidev Chaudhuri

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In Fig. 2 we show the performance of noise-prune (in the matched diagonal regime) on diagonally dominant networks with clustered structure (parameters in figure caption). We compare it to a control case in which edges are sampled and either strengthened or pruned (as in Eq. 5), but with probabilities proportional only to weight (i.e., without a covariance term and thus without accounting for higher-order network structure). The box plots in the first columns of Fig. 2a,b show the distribution of the relative change in eigenvalues of the pruned network compared to the original network."
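The control case quoted above (sampling edges with probability proportional to weight alone, then strengthening kept edges and pruning the rest) can be sketched as follows. This is a hedged illustration, not the paper's Eq. 5: the function name, the use of multinomial sampling, and the unbiased reweighting of kept edges are all assumptions about how such a weight-proportional control might be implemented.

```python
import numpy as np

def weight_proportional_prune(W, n_samples, rng):
    """Hypothetical control pruner: sample edges with probability
    proportional to |weight|. Sampled edges are strengthened
    (reweighted so the sparse matrix matches the original in
    expectation); unsampled edges are pruned to zero."""
    idx = np.flatnonzero(W)                # positions of existing edges
    p = np.abs(W.ravel()[idx])
    p = p / p.sum()                        # sampling probabilities ~ |w_ij|
    counts = rng.multinomial(n_samples, p)
    W_sparse = np.zeros_like(W)
    kept = counts > 0
    # Unbiased reweighting: w_ij * count_ij / (n_samples * p_ij).
    W_sparse.ravel()[idx[kept]] = (
        W.ravel()[idx[kept]] * counts[kept] / (n_samples * p[kept])
    )
    return W_sparse

rng = np.random.default_rng(0)
W = rng.random((20, 20)) * (rng.random((20, 20)) < 0.5)
W_sparse = weight_proportional_prune(W, 30, rng)
```

The covariance-aware noise-prune rule the paper proposes would replace the weight-only probabilities `p` with probabilities that also incorporate noise covariance, which is exactly the term this control omits.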
Researcher Affiliation | Academia | Eli Moore, Department of Mathematics, University of California, Davis, Davis, CA 95616, elimoore@ucdavis.edu; Rishidev Chaudhuri, Center for Neuroscience; Department of Mathematics; Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, CA 95616, rchaudhuri@ucdavis.edu
Pseudocode | No | The paper describes the pruning rule mathematically (Eqs. 4, 5) but does not provide it in pseudocode or an algorithm-block format.
Open Source Code | No | The paper contains no explicit statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper describes generating synthetic 'clustered symmetric and non-symmetric networks' for its experiments, but it does not cite or link to any publicly available dataset; the networks are constructed specifically for the study.
Dataset Splits | No | The paper mentions pruning networks to given densities (e.g., '10% density' or '20% sparsity') and evaluating performance, but it does not describe train/validation/test splits or cross-validation procedures on any external dataset.
Hardware Specification | No | The paper provides no details about the hardware used for the experiments (e.g., GPU model, CPU type, or memory).
Software Dependencies | No | The paper does not name any software packages with version numbers or list the dependencies required to replicate the experiments.
Experiment Setup | Yes | "The left network of size N = 3,000 contains 3 clusters of size 100 and 1 cluster of size 2700, with dense within-cluster connections (60%, N(1, 1)) and sparse long-range connections (5000 total, U(0, 1))."
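A minimal sketch of how a clustered network matching the quoted setup might be generated, assuming the stated cluster sizes, 60% within-cluster density with N(1, 1) weights, and 5000 long-range U(0, 1) edges. Details the quote leaves open (directedness, self-loops, and whether long-range edges are restricted to between-cluster pairs) are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cluster sizes as described: three clusters of 100 and one of 2700 (N = 3000).
sizes = [100, 100, 100, 2700]
N = sum(sizes)
W = np.zeros((N, N))

# Dense within-cluster connections: 60% density, weights drawn from N(1, 1).
start = 0
for s in sizes:
    block = rng.normal(1.0, 1.0, size=(s, s))
    mask = rng.random((s, s)) < 0.6
    W[start:start + s, start:start + s] = block * mask
    start += s

# Sparse long-range connections: 5000 edges with weights drawn from U(0, 1).
# For simplicity these are placed between any pair of distinct, currently
# unconnected nodes; the paper may restrict them to between-cluster pairs.
n_long = 5000
placed = 0
while placed < n_long:
    i, j = rng.integers(0, N, size=2)
    if i != j and W[i, j] == 0:
        W[i, j] = rng.random()
        placed += 1
```

With N = 3000 and only 5000 extra edges, collisions in the placement loop are rare, so rejection sampling is cheap; a directed, non-symmetric matrix results, which matches the paper's non-symmetric network experiments.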