Mesoscopic modeling of hidden spiking neurons
Authors: Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform efficient model inversion of large SNNs, in the sense that it can recover connectivity parameters, infer single-trial latent population activity, reproduce ongoing metastable dynamics, and generalize when subjected to perturbations mimicking optogenetic stimulation. |
| Researcher Affiliation | Academia | Laboratory of Computational Neuroscience, École polytechnique fédérale de Lausanne (EPFL) |
| Pseudocode | No | The paper describes algorithms such as the Baum-Viterbi algorithm but does not present them in a structured pseudocode or algorithm block. |
| Open Source Code | Yes | Our implementation of the algorithm is openly available at https://github.com/EPFL-LCN/neuLVM. |
| Open Datasets | No | To build a spiking benchmark dataset, we randomly selected 9 neurons (3 neurons from each of the three populations) and considered the spike trains of these neurons as the observed data. The paper generates synthetic data and does not provide a link or specific citation to make it publicly available. |
| Dataset Splits | No | The paper generates synthetic data and a benchmark dataset but does not explicitly provide details on how the data was split into training, validation, and test sets with specific percentages or counts. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or computing cluster specifications) used to run the experiments. |
| Software Dependencies | No | The paper states "Our implementation of the algorithm is openly available at https://github.com/EPFL-LCN/neuLVM" but does not specify any particular software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or other libraries). |
| Experiment Setup | No | The paper describes some network parameters and trial durations (e.g., 'J = 60.32 mV', '10 seconds recordings') but does not provide specific hyperparameter values (like learning rate, batch size, or epochs for training neuLVM) or comprehensive system-level training settings in a dedicated section. |
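The Dataset Splits entry above notes that the paper does not report explicit train/validation/test partitions for its synthetic spike trains. As a point of reference for what such a specification could look like, here is a minimal sketch of a trial-level split; the function name, fractions, and trial count are hypothetical and not taken from the paper:

```python
import numpy as np

def split_trials(n_trials, frac_train=0.8, frac_val=0.1, seed=0):
    """Shuffle trial indices and partition them into train/val/test sets.

    Hypothetical helper for illustration; the paper does not specify
    any particular split procedure or fractions.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_trials)
    n_train = int(frac_train * n_trials)
    n_val = int(frac_val * n_trials)
    # Remaining trials after train and validation form the test set.
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# Example: 100 synthetic trials split 80/10/10 with a fixed seed.
train, val, test = split_trials(100)
```

Fixing the random seed, as above, is what would make such a split reproducible across runs.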