Impression learning: Online representation learning with synaptic plasticity
Authors: Colin Bredenberg, Benjamin Lyo, Eero Simoncelli, Cristina Savin
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that learning can be implemented online, is capable of capturing temporal dependencies in continuous input streams, and generalizes to hierarchical architectures. Furthermore, we demonstrate both analytically and empirically that the algorithm is more data-efficient than a three-factor plasticity alternative, enabling it to learn statistics of high-dimensional, naturalistic inputs. |
| Researcher Affiliation | Academia | Colin Bredenberg, Center for Neural Science, New York University (cjb617@nyu.edu); Benjamin S. H. Lyo, Center for Neural Science, New York University (blyo@nyu.edu); Eero P. Simoncelli, Center for Neural Science, New York University, and Flatiron Institute, Simons Foundation (eero.simoncelli@nyu.edu); Cristina Savin, Center for Neural Science and Center for Data Science, New York University (csavin@nyu.edu) |
| Pseudocode | No | The paper provides mathematical derivations and equations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code provided at: https://github.com/colinbredenberg/Impression-Learning-Camera-Ready. |
| Open Datasets | Yes | We used the training and test sets of the Free Spoken Digits Dataset [36], which provides audio time series of humans speaking digits 0-9. ... The FSDD is available at https://github.com/Jakobovski/free-spoken-digit-dataset. |
| Dataset Splits | No | The paper mentions using "training and test sets" but does not specify details regarding validation splits, percentages, or the methodology for creating them. |
| Hardware Specification | No | The paper states that the code was run "on an internal cluster" but does not provide any specific details about the hardware used (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions the use of supplementary code but does not provide specific software names with version numbers (e.g., Python version, library versions like PyTorch or TensorFlow). |
| Experiment Setup | Yes | We optimized learning rates for NVI, BP, and IL separately on the lowest dimensional condition by grid search across orders of magnitude (10^-2, 10^-3, etc.), and found that NVI performed worse over the entire range, while IL and BP showed similar performance (using the negative ELBO loss as a standard). |
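The experiment-setup row describes a learning-rate grid search across orders of magnitude, selected by the negative ELBO loss. A minimal sketch of that procedure, assuming a hypothetical `train_and_evaluate` stand-in for training a model (IL, BP, or NVI) and returning its final loss (this is not the authors' code):

```python
import math

def train_and_evaluate(lr: float) -> float:
    """Hypothetical placeholder for training a model at learning rate `lr`
    and returning its negative-ELBO loss. Here the loss is a toy function
    minimized at lr = 1e-3, purely for illustration."""
    return (math.log10(lr) + 3.0) ** 2

def grid_search_learning_rate(order_range=range(-5, 0)):
    """Try learning rates 10^-5 ... 10^-1 (one per order of magnitude)
    and return the (learning_rate, loss) pair with the lowest loss."""
    candidates = [10.0 ** k for k in order_range]
    results = {lr: train_and_evaluate(lr) for lr in candidates}
    best_lr = min(results, key=results.get)
    return best_lr, results[best_lr]

best_lr, best_loss = grid_search_learning_rate()
```

As in the paper's setup, this selects one learning rate per method by sweeping powers of ten; the tuned rate is then reused across conditions.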