Visual Pinwheel Centers Act as Geometric Saliency Detectors

Authors: Haixin Zhong, Mingyi Huang, Wei Dai, Haoyu Wang, Anna Roe, Yuguo Yu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive exposure to image data, our network evolves from salt-and-pepper structures to pinwheel structures, with neurons becoming localized bandpass filters responsive to various orientations.
Researcher Affiliation | Academia | 1. Research Institute of Intelligent Complex Systems, Fudan University. 2. State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University. 3. Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University. 4. MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Key Laboratory of Biomedical Engineering and Instrument Science, Zhejiang University. 5. Shanghai Artificial Intelligence Laboratory.
Pseudocode | No | The paper describes the model and its dynamics using equations and textual descriptions, but does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The data and codes are available on request from the authors.
Open Datasets | Yes | We use 160 whitened natural images as the training dataset, normalized to zero mean and uniform variance, derived from 20 base images (512×512 pixels) [45, 46]. The ground-truth boundaries from the BSDS500 dataset [36], used as binary input, represent geometric complexity (edges and curves) (Fig. 3a).
Dataset Splits | No | The paper uses a "training dataset" (Section 3.1) and refers to evaluating results, but does not explicitly provide details on how the dataset was split into training, validation, and test sets (e.g., percentages, counts, or method of splitting).
Hardware Specification | Yes | CPU: Intel Xeon Gold 6348 @ 2.60 GHz; GPU: NVIDIA A100; Memory: 512 GB
Software Dependencies | Yes | Simulation platform: MATLAB R2023a and Python 3.9
Experiment Setup | Yes | Batch size: 100; image set: 512×512×160. For synaptic plasticity, the learning rates are ηFF = 0.2 (image to E-neurons), ηEE = 0.01 (E- to E-neurons), ηEI = 0.7 (I- to E-neurons), ηII = 1.5 (I- to I-neurons), and ηIE = 0.7 (E- to I-neurons), while the neural connectivity parameters are αmax,E = 1.0 (E max weight), αmax,I = 0.5 (I max weight), σEE = 3.5 (E-E coupling range), σEI = 2.9 (E-I coupling range), σIE = 2.6 (I-E coupling range), and σII = 2.1 (I-I coupling range).
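The connectivity parameters in the setup above (weight caps αmax,E, αmax,I and coupling ranges σEE, σEI, σIE, σII) suggest Gaussian distance-dependent lateral coupling on a cortical sheet. The sketch below is a minimal illustration under that assumption; the grid size, the exact kernel normalization, and the function name `gaussian_coupling` are assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_coupling(grid, sigma, alpha_max):
    """Assumed kernel: W[i, j] = alpha_max * exp(-d_ij^2 / (2 sigma^2)),
    where d_ij is Euclidean distance between neurons i, j on a grid x grid sheet."""
    ys, xs = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return alpha_max * np.exp(-d2 / (2.0 * sigma ** 2))

GRID = 16  # illustrative sheet size, not from the paper

# Weight caps keyed to the presynaptic population: E sources capped at
# alpha_max,E = 1.0, I sources at alpha_max,I = 0.5 (per the table).
W_EE = gaussian_coupling(GRID, sigma=3.5, alpha_max=1.0)  # E -> E
W_EI = gaussian_coupling(GRID, sigma=2.9, alpha_max=0.5)  # I -> E
W_IE = gaussian_coupling(GRID, sigma=2.6, alpha_max=1.0)  # E -> I
W_II = gaussian_coupling(GRID, sigma=2.1, alpha_max=0.5)  # I -> I
```

Note how the reported σ ordering (σEE > σEI > σIE > σII) makes recurrent excitation the broadest kernel and I-to-I coupling the narrowest, which is the kind of center-surround asymmetry that supports map formation.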
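The training data described in the table are whitened natural images normalized to zero mean and uniform variance. A common recipe for this (the ramp filter with a soft high-frequency cutoff, in the style the paper's references [45, 46] point to) can be sketched as follows; the cutoff `f0` and the exact filter form are assumptions, not taken from the paper.

```python
import numpy as np

def whiten_image(img, f0=200.0):
    """Whiten a square image in the frequency domain, then normalize
    to zero mean and unit variance (an assumed, standard recipe)."""
    n = img.shape[0]
    freqs = np.fft.fftfreq(n) * n                 # frequencies in cycles/image
    fx, fy = np.meshgrid(freqs, freqs)
    rho = np.sqrt(fx ** 2 + fy ** 2)
    filt = rho * np.exp(-(rho / f0) ** 4)         # ramp * soft low-pass cutoff
    out = np.real(np.fft.ifft2(np.fft.fft2(img) * filt))
    out -= out.mean()                             # zero mean
    out /= out.std() + 1e-12                      # uniform (unit) variance
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((512, 512))             # stand-in for a natural image
w = whiten_image(img)
```

In practice each of the 20 base 512×512 images would be whitened once, then patches sampled from the resulting 160-image set for training.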