Extracting computational mechanisms from neural data using low-rank RNNs
Authors: Adrian Valente, Jonathan W. Pillow, Srdjan Ostojic
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here we first demonstrate the consistency of our method and then apply it to two use cases: (i) we reverse-engineer black-box vanilla RNNs trained to perform cognitive tasks, and (ii) we infer latent dynamics and neural contributions from electrophysiological recordings of nonhuman primates performing a similar task. |
| Researcher Affiliation | Academia | Adrian Valente (École Normale Supérieure, PSL Research University); Jonathan W. Pillow (Princeton Neuroscience Institute, Princeton University); Srdjan Ostojic (École Normale Supérieure, PSL Research University) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/adrian-valente/lowrank_inference/ |
| Open Datasets | Yes | We considered here electrophysiological recordings from non-human primates performing a context-dependent decision-making task similar to that studied in previous sections [29]. |
| Dataset Splits | Yes | The quality of fits was quantified by leaving out a random subset of 8 conditions during network inference, and evaluating the R² of fitted networks on these left-out conditions. (A minimal sketch of this held-out evaluation appears after the table.) |
| Hardware Specification | Yes | All networks were trained using a single Nvidia GPU (RTX 3090, VRAM 24GB). |
| Software Dependencies | No | All networks were implemented in pytorch [34]. The paper names PyTorch but does not specify a version number or list any other software dependencies with versions. |
| Experiment Setup | Yes | The networks were trained using the Adam optimizer with a learning rate of 10⁻³ and a batch size of 64. (A training sketch with these hyperparameters follows the table.) |
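
To make the experiment-setup row concrete, below is a minimal PyTorch sketch of a rank-R RNN of the kind the paper studies (recurrent connectivity factored as J = m nᵀ / N) trained with Adam at the reported learning rate of 10⁻³ and batch size of 64. All names and dimensions (`LowRankRNN`, `n_units`, `rank`, the task sizes) are illustrative assumptions, not taken from the authors' repository.

```python
import torch
import torch.nn as nn

class LowRankRNN(nn.Module):
    """Sketch of a rank-R RNN: the recurrent matrix is factored as
    J = m @ n.T / n_units, so its rank is at most R. Illustrative only,
    not the authors' implementation."""

    def __init__(self, n_inputs, n_units, n_outputs, rank, tau=0.2, dt=0.02):
        super().__init__()
        self.m = nn.Parameter(torch.randn(n_units, rank) / n_units**0.5)
        self.n = nn.Parameter(torch.randn(n_units, rank) / n_units**0.5)
        self.w_in = nn.Parameter(torch.randn(n_units, n_inputs) / n_inputs**0.5)
        self.w_out = nn.Parameter(torch.randn(n_outputs, n_units) / n_units**0.5)
        self.alpha = dt / tau          # Euler step relative to the time constant
        self.n_units = n_units

    def forward(self, inputs):
        # inputs: (batch, time, n_inputs)
        batch, T, _ = inputs.shape
        x = torch.zeros(batch, self.n_units, device=inputs.device)
        J = self.m @ self.n.T / self.n_units   # rank-R connectivity
        outputs = []
        for t in range(T):
            r = torch.tanh(x)                  # firing rates
            x = x + self.alpha * (-x + r @ J.T + inputs[:, t] @ self.w_in.T)
            outputs.append(torch.tanh(x) @ self.w_out.T)
        return torch.stack(outputs, dim=1)     # (batch, time, n_outputs)

# Training setup matching the reported hyperparameters: Adam, lr 1e-3,
# batches of 64 trials (loss choice here is an assumption).
model = LowRankRNN(n_inputs=4, n_units=512, n_outputs=1, rank=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(inputs, targets):
    # inputs: (64, time, n_inputs), targets: (64, time, n_outputs)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```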
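
Likewise, the dataset-splits row describes holding out 8 task conditions during network inference and scoring generalization with R² on them. The sketch below illustrates that style of leave-out evaluation on hypothetical condition-averaged data; the array shapes, the total number of conditions, and the noisy stand-in for model output are all assumptions.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination, pooled over conditions, time, and neurons."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)

# Hypothetical condition-averaged responses: (conditions, time, neurons).
data_rates = rng.standard_normal((72, 100, 60))

# Leave out a random subset of 8 conditions during network inference ...
held_out = rng.choice(72, size=8, replace=False)
train_ids = np.setdiff1d(np.arange(72), held_out)

# ... fit the network on train_ids (fitting omitted here), then evaluate
# R^2 on the left-out conditions. A noisy copy stands in for model output.
model_rates = data_rates + 0.1 * rng.standard_normal(data_rates.shape)
r2 = r_squared(data_rates[held_out], model_rates[held_out])
print(f"R^2 on held-out conditions: {r2:.3f}")
```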