A Recurrent Neural Circuit Mechanism of Temporal-scaling Equivariant Representation

Authors: Junfeng Zuo, Xiao Liu, Ying Nian Wu, Si Wu, Wenhao Zhang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We simulated the proposed circuit model to test temporal scaling via manipulating the control input's gain (SI. Sec. 4.1, network simulation details). Changing the input gain α varies the time scales of neural sequences (Fig. 3A, left), and the actual scaling factor is proportional to the input gain α, as predicted by our theory (Fig. 3A, right)."
Researcher Affiliation | Academia | (1) Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, IDG/McGovern Institute for Brain Research, Center of Quantitative Biology, Peking University; (2) Department of Statistics, University of California, Los Angeles; (3) Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center; (4) O'Donnell Brain Institute, UT Southwestern Medical Center.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about the availability of its source code or a link to a code repository.
Open Datasets | No | "As a proof of concept, we use a feedforward circuit (modeled as a three-layer perceptron) to transform u[z(t)] into a 2D sequence (x and y coordinates) of hand-written digits, e.g., digit 6 (Fig. 4A-B, see details in SI. Sec. 4.3)." The paper does not provide concrete access information (link, DOI, citation) for a publicly available training dataset.
Dataset Splits | No | "The feedforward circuit was trained via back-propagation by only using the neural response and the hand-written sequence (Fig. 4B) at only one temporal scale. After training, we test whether the whole disentangled circuit can generalize the hand-written 6 sequence at other time scales by manipulating the control input's gain (α in Eq. 23)." The paper discusses training and testing but does not provide specific numerical dataset splits (e.g., percentages or counts) or cross-validation details for reproduction.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the simulations or experiments.
Software Dependencies | No | The paper does not provide specific software names with version numbers for reproducibility.
Experiment Setup | No | The paper mentions training a feedforward circuit via back-propagation and refers to SI. Sec. 4.3 for details, but the main text does not provide specific experimental setup details such as hyperparameter values or training configurations.
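The temporal-scaling claim quoted under Research Type (scaling factor proportional to the control input's gain α) can be illustrated with a toy linear recurrent system. This is a hypothetical sketch, not the paper's circuit model; `simulate` is a name introduced here:

```python
import numpy as np

def simulate(alpha, T=2.0, dt=1e-3):
    """Euler-integrate a 2D rotational flow x' = alpha * A @ x."""
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])  # antisymmetric: pure rotation
    x = np.array([1.0, 0.0])
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * alpha * (A @ x)
        traj.append(x.copy())
    return np.array(traj)

# Doubling the gain traverses the same state trajectory twice as fast:
# the gain-2 state at step n matches the gain-1 state at step 2n.
slow = simulate(alpha=1.0)
fast = simulate(alpha=2.0)
err = np.max(np.abs(fast[:500] - slow[::2][:500]))
print(err)  # small residual from Euler discretization only
```

The scaling factor equals the gain ratio here because time enters the dynamics only through the product `alpha * t`, which is the equivariance property the quoted experiment tests.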
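The generalization test described under Dataset Splits (train the readout at one temporal scale, test at others) has a simple structural explanation: a static readout from neural state to coordinates commutes with temporal rescaling of the state sequence. A minimal sketch, using a fixed closed-form readout as a stand-in for the paper's trained three-layer perceptron (`readout` and `latent_trajectory` are hypothetical names):

```python
import numpy as np

def readout(z):
    # Hypothetical static readout: maps a 1D latent phase z to 2D coordinates,
    # standing in for the paper's trained three-layer perceptron.
    return np.stack([np.cos(2 * np.pi * z), np.sin(4 * np.pi * z)], axis=-1)

def latent_trajectory(alpha, T=1.0, dt=1e-3):
    # The latent state advances at a rate set by the control gain alpha.
    t = np.arange(0, T, dt)
    return alpha * t  # z(t) = alpha * t for this toy linear flow

# Readout "trained" (here: defined) at one temporal scale...
z_train = latent_trajectory(alpha=1.0)
seq_train = readout(z_train)

# ...generalizes to another scale: the same spatial curve, traversed twice as
# fast, so the fast sequence matches every other sample of the slow one.
z_fast = latent_trajectory(alpha=2.0, T=0.5)
seq_fast = readout(z_fast)
print(np.allclose(seq_fast, seq_train[::2]))  # True
```

Because the readout never sees time directly, only the current state, no retraining is needed when the control gain changes the playback speed.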