A mechanistic multi-area recurrent network model of decision-making
Authors: Michael Kleinman, Chandramouli Chandrasekaran, Jonathan Kao
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We trained multi-area RNNs to perform a perceptual decision-making task (Checkerboard Task) and compared their activity to recordings of neurons in the monkey dorsal premotor cortex (PMd). |
| Researcher Affiliation | Academia | ¹University of California, Los Angeles; ²Boston University (michael.kleinman@ucla.edu, cchandr1@bu.edu, kao@seas.ucla.edu) |
| Pseudocode | No | The paper defines the RNN equations in Section 3 but does not include any blocks explicitly labeled as “Pseudocode” or “Algorithm” (a hedged sketch of a typical rate-RNN update of this kind appears below the table). |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include a link to a code repository. |
| Open Datasets | No | The paper states, “We also thank Krishna V. Shenoy for kindly allowing us to use the data collected by Dr. Chandrasekaran when he was a postdoc in the Shenoy Lab.” This indicates the use of proprietary data obtained with permission, not a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper describes training multi-area RNNs on a perceptual decision-making task and comparing their activity to monkey neuron recordings, but it does not specify any training, validation, or test dataset splits for either the RNN training or the monkey data analysis. |
| Hardware Specification | Yes | We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. |
| Software Dependencies | No | The paper describes the model architecture and refers to the Adam optimizer, but it does not specify version numbers for any programming languages, libraries, or software components used in the experiments. |
| Experiment Setup | Yes | We tested how robust these results were to architecture and hyperparameter selection. In particular, we found that PMd-like representations emerged when we incorporated anatomical and neurophysiological constraints: Dale's law, empirical levels of feedforward inhibition, and at least 3 areas (Fig. 3a-e). When we varied machine learning hyperparameters, we found that our results were generally robust: multi-area RNNs had PMd-like representations in their last area over a wide range of hyperparameter settings (Fig. 3f). The only exceptions were when the number of units was relatively small or the learning rate was relatively large. (A hedged sketch of a Dale's-law weight parameterization appears below the table.) |
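The Pseudocode row notes that the paper defines its RNN equations in Section 3 without an algorithm block. As a point of reference, below is a minimal sketch of the standard Euler-discretized rate-RNN update that models of this kind typically use; the function name, the ReLU nonlinearity, and all constants are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of a standard Euler-discretized rate-RNN update of the form
#   tau * dx/dt = -x + W_rec @ f(x) + W_in @ u + b + noise,
# the family of dynamics that papers like this one define in their methods.
# All names, the ReLU nonlinearity, and the constants are assumptions.

def rnn_step(x, u, W_rec, W_in, b, tau=0.05, dt=0.01, noise_std=0.1, rng=None):
    """Advance the hidden state x one Euler step given input u."""
    rng = rng or np.random.default_rng(0)
    r = np.maximum(x, 0.0)                        # firing rates via ReLU
    noise = noise_std * rng.standard_normal(x.shape)
    dx = (dt / tau) * (-x + W_rec @ r + W_in @ u + b + noise)
    return x + dx

# Illustrative usage: 100 units, 2 inputs, random weights.
rng = np.random.default_rng(1)
n, m = 100, 2
W_rec = rng.standard_normal((n, n)) / np.sqrt(n)
W_in = rng.standard_normal((n, m)) / np.sqrt(m)
b, x = np.zeros(n), np.zeros(n)
for _ in range(200):
    x = rnn_step(x, np.array([1.0, -1.0]), W_rec, W_in, b, rng=rng)
```

In training, such a state would be unrolled over a trial and optimized against a task loss (e.g., with Adam, as the Software Dependencies row notes).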
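The Experiment Setup row cites Dale's law and feedforward inhibition as key constraints. One common way to enforce Dale's law in trained RNNs is to parameterize the recurrent matrix as an elementwise-nonnegative matrix times a fixed diagonal sign matrix; the sketch below illustrates that pattern. The 80/20 excitatory/inhibitory split and the function name are assumptions, not values taken from the paper.

```python
import numpy as np

# Hedged sketch of a sign-constrained ("Dale's law") recurrent weight matrix:
# each unit is fixed as excitatory (+1) or inhibitory (-1), so all of its
# outgoing weights share that sign. The 80/20 E/I split is an assumption.

def dale_recurrent(W_free, frac_excitatory=0.8):
    """Map an unconstrained trainable matrix to a Dale's-law-respecting one."""
    n = W_free.shape[1]
    signs = np.ones(n)
    signs[int(frac_excitatory * n):] = -1.0   # trailing units are inhibitory
    # With the convention x_new ~ W @ r, column j holds unit j's outgoing
    # weights, so multiplying by diag(signs) fixes each unit's sign.
    return np.abs(W_free) @ np.diag(signs)
```

Feedforward inhibition between areas would then be controlled by additionally masking which blocks of this matrix may be nonzero, e.g., allowing only a tuned fraction of inhibitory projections from one area's units to the next; the paper's exact masks are not reproduced here.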