A Biologically Plausible Neural Network for Slow Feature Analysis

Authors: David Lipshutz, Charles Windolf, Siavash Golkar, Dmitri Chklovskii

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "To validate our approach, we test our algorithm on datasets of naturalistic stimuli and reproduce results originally performed in the offline setting." |
| Researcher Affiliation | Academia | (1) Center for Computational Neuroscience, Flatiron Institute; (2) Department of Statistics, Columbia University; (3) Neuroscience Institute, NYU Medical Center |
| Pseudocode | Yes | Algorithm 1: Bio-SFA |
| Open Source Code | Yes | The evaluation code is available at https://github.com/flatironinstitute/bio-sfa. |
| Open Datasets | Yes | "We test Bio-SFA on a sequence of natural images." A 256-dimensional sequence {z_t} was generated by moving a 16×16 patch over 13 natural images from [12] via translations, zooms, and rotations. Following Schönfeld and Wiskott [25], a hierarchical 3-layer organization of Bio-SFA modules is tested on inputs from the RatLab framework [24]. |
| Dataset Splits | No | The paper does not provide train/validation/test splits, percentages, or cross-validation details needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not list software dependencies with version numbers needed to replicate the experiments. |
| Experiment Setup | No | The paper names the parameters of Algorithm 1 (γ, ε) but does not give their values. It describes the general training strategy ("trained greedily layer-by-layer with weight sharing") and architectural choices, but lacks concrete hyperparameters (e.g., learning rate, batch size, number of epochs) and system-level training settings. |
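For context on the objective that Algorithm 1 (Bio-SFA) optimizes online, the sketch below implements classical *offline* slow feature analysis: whiten the input, then keep the directions along which the whitened signal changes most slowly (smallest eigenvalues of the temporal-difference covariance). This is the standard batch SFA formulation, not the paper's biologically plausible online algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def slow_feature_analysis(X, n_components, eps=1e-8):
    """Batch SFA sketch. X: (T, d) time series. Returns (T, n_components)
    slow features, slowest first."""
    # Center and whiten the input so all directions have unit variance.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals + eps)   # (d, d) whitening matrix
    Z = Xc @ W_white
    # Slowness: minimize the variance of temporal differences. In the
    # whitened space this is an eigenproblem on the difference covariance;
    # eigh sorts eigenvalues ascending, so the first columns are slowest.
    dZ = np.diff(Z, axis=0)
    dcov = dZ.T @ dZ / (len(dZ) - 1)
    _, devecs = np.linalg.eigh(dcov)
    return Z @ devecs[:, :n_components]
```

On a linear mixture of one slow sinusoid and several fast noise channels, the first extracted component recovers the sinusoid up to sign, which is the behavior Bio-SFA reproduces in the online, streaming setting.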
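The natural-image input described under "Open Datasets" (a 256-dimensional sequence of flattened 16×16 patches moved over larger images) can be sketched as a random walk of a window over one image. This toy generator is an assumption: it implements only translations, not the zooms and rotations used in the paper, and `patch_sequence` is a hypothetical helper name, not code from the released repository.

```python
import numpy as np

def patch_sequence(image, T, patch=16, max_step=2, seed=0):
    """Slide a patch x patch window over a 2-D image via a clipped random
    walk, returning a (T, patch*patch) sequence of flattened patches."""
    rng = np.random.default_rng(seed)
    H, W = image.shape
    r = int(rng.integers(0, H - patch))
    c = int(rng.integers(0, W - patch))
    out = np.empty((T, patch * patch))
    for step in range(T):
        out[step] = image[r:r + patch, c:c + patch].ravel()
        # Small random translation, clipped so the window stays in bounds.
        r = int(np.clip(r + rng.integers(-max_step, max_step + 1), 0, H - patch))
        c = int(np.clip(c + rng.integers(-max_step, max_step + 1), 0, W - patch))
    return out
```

With `patch=16` each row of the output is 256-dimensional, matching the input dimensionality quoted from the paper.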