Learning sparse codes from compressed representations with biologically plausible local wiring constraints

Authors: Kion Fallah, Adam Willats, Ninghao Liu, Christopher Rozell

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show analytically and empirically that unsupervised learning of sparse representations can be performed in the compressed space despite significant local wiring constraints in compression matrices of varying forms (corresponding to different local wiring patterns). Our analysis verifies that even with significant local wiring constraints, the learned representations remain qualitatively similar, have similar quantitative performance in both training and generalization error, and are consistent across many measures with measured macaque V1 receptive fields. To test the proposed model, we conduct a number of learning experiments using whitened natural image patches compressed with the BDMs and BRMs with varying degrees of localization (L) and compression ratio M = 0.5N (other compression ratios did not qualitatively change results, shown in the supplementary materials). Specifically, in our experiments we used 80,000 16×16 patches extracted from 8 whitened natural images for training. (A sketch of such compression matrices follows this table.)
Researcher Affiliation | Academia | Georgia Institute of Technology, Atlanta, GA 30332, USA; Texas A&M University, College Station, TX 77843, USA; kion@gatech.edu, awillats3@gatech.edu, nhliu43@tamu.edu, crozell@gatech.edu
Pseudocode | No | The paper describes algorithms and mathematical formulations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at: https://github.com/siplab-gt/localized-sparse-coding.
Open Datasets | Yes | The images used in the experiments of this paper were the same as those used in previous work on sparse coding [50].
Dataset Splits | Yes | Specifically, in our experiments we used 80,000 16×16 patches extracted from 8 whitened natural images for training. [...] We kept 10% of the training data-set uncompressed for correlation in recovering ΨRM. [...] The validation data-set was built from 20,000 patches extracted from 2 images not used in training. (A split sketch follows this table.)
Hardware Specification | No | The paper mentions general computing concepts but does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions software such as "fit2d Gabor" and refers to mathworks.com for libraries, but it does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | Specifically, in our experiments we used 80,000 16×16 patches extracted from 8 whitened natural images for training. This was broken into batches of size 100, iterated over 150 epochs with decaying step-size on the learning gradient step. To infer coefficients we used λ = 5e-2, chosen experimentally as a value that produces stable learning convergence. These hyper-parameters were held constant across all experiments. (A training-loop sketch follows this table.)
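
To make the BDM/BRM compression step quoted in the Research Type row concrete, here is a minimal NumPy sketch, not the authors' released code, of how a block-diagonal matrix (BDM) and a banded random matrix (BRM) with localization width L and compression ratio M = 0.5N might be built and applied to whitened 16×16 patches. The block layout, the even split of measurement rows across blocks, the 1/√L scaling, and the example L = 64 are all assumptions; the repository linked above is the reference implementation.

```python
# Minimal sketch (not the authors' released code) of localized compression
# matrices Phi in R^{M x N}. Block layout, row allocation, 1/sqrt(L) scaling,
# and the example L = 64 are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 16 * 16          # dimension of a vectorized 16x16 patch
M = N // 2           # compression ratio M = 0.5 N

def block_diagonal_matrix(M, N, L):
    """BDM: dense Gaussian blocks of width L on the diagonal, zeros elsewhere."""
    num_blocks = N // L
    rows_per_block = M // num_blocks              # assumed even split of rows
    Phi = np.zeros((M, N))
    for b in range(num_blocks):
        r, c = b * rows_per_block, b * L
        Phi[r:r + rows_per_block, c:c + L] = rng.standard_normal((rows_per_block, L))
    return Phi / np.sqrt(L)

def banded_random_matrix(M, N, L):
    """BRM: each measurement sees only a contiguous band of L input dimensions."""
    Phi = np.zeros((M, N))
    for m in range(M):
        start = round(m * (N - L) / max(M - 1, 1))  # slide the band across the input
        Phi[m, start:start + L] = rng.standard_normal(L)
    return Phi / np.sqrt(L)

Phi = block_diagonal_matrix(M, N, L=64)
patches = rng.standard_normal((80_000, N))        # stand-in for whitened patches
compressed = patches @ Phi.T                      # y = Phi x for every patch
print(Phi.shape, compressed.shape)                # (128, 256) (80000, 128)
```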
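
The Dataset Splits row can be read as the following recipe. The patch-sampling helper and the 512×512 stand-in image sizes are hypothetical; the patch counts (80,000 training patches from 8 images, a 10% uncompressed hold-out, 20,000 validation patches from 2 unseen images) are the values quoted from the paper.

```python
# Minimal sketch of the quoted train/validation split; extraction mechanics
# and image sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(images, num_patches, patch_size=16):
    """Sample random patch locations from a list of whitened images."""
    patches = np.empty((num_patches, patch_size * patch_size))
    for i in range(num_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - patch_size)
        c = rng.integers(img.shape[1] - patch_size)
        patches[i] = img[r:r + patch_size, c:c + patch_size].ravel()
    return patches

train_images = [rng.standard_normal((512, 512)) for _ in range(8)]  # stand-ins
val_images = [rng.standard_normal((512, 512)) for _ in range(2)]    # held-out images

train_patches = extract_patches(train_images, 80_000)   # training set
val_patches = extract_patches(val_images, 20_000)       # validation set

# 10% of the training patches stay uncompressed, per the quote, to correlate
# against inferred coefficients when recovering the uncompressed dictionary.
uncompressed_holdout = train_patches[: len(train_patches) // 10]
```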
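
Finally, the Experiment Setup row maps onto a compressed-space dictionary learning loop along the lines sketched below. ISTA inference, the step-size decay schedule, the dictionary size K, and the initialization are assumptions rather than the paper's exact procedure; batch size 100, 150 epochs, and λ = 5e-2 are the quoted values.

```python
# Minimal sketch (assumed details, not the authors' code) of the learning loop:
# sparse codes are inferred against a dictionary learned directly in the
# compressed space, followed by a decaying gradient step on the dictionary.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 128, 256                          # patch dim, measurement dim, # atoms
lam, batch_size, num_epochs = 5e-2, 100, 150     # values quoted from the paper

Phi = rng.standard_normal((M, N)) / np.sqrt(N)   # stand-in compression matrix
D = rng.standard_normal((M, K))                  # dictionary in the compressed space
D /= np.linalg.norm(D, axis=0)
data = rng.standard_normal((80_000, N))          # stand-in whitened patches

def ista(y, D, lam, num_iters=50):
    """Sparse inference: a ~= argmin_a 0.5*||y - D a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    a = np.zeros((D.shape[1], y.shape[1]))
    for _ in range(num_iters):
        a -= step * D.T @ (D @ a - y)
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

for epoch in range(num_epochs):
    lr = 0.1 / (1.0 + epoch / 10.0)                  # assumed step-size decay
    for start in range(0, len(data), batch_size):
        y = Phi @ data[start:start + batch_size].T   # compressed measurements
        a = ista(y, D, lam)
        D += lr * (y - D @ a) @ a.T / batch_size     # gradient step on ||y - D a||^2
        D /= np.linalg.norm(D, axis=0)               # renormalize dictionary atoms
```

Per the Dataset Splits quote, the paper then recovers the uncompressed dictionary by correlation using the 10% uncompressed hold-out; that recovery step is not shown in this sketch.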