Rescuing neural spike train models from bad MLE
Authors: Diego Arribas, Yuan Zhao, Il Memming Park
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both real and synthetic neural data validate the proposed approach, showing that it leads to well-behaved models. |
| Researcher Affiliation | Academia | (1) Department of Neurobiology and Behavior, Center for Neural Circuit Dynamics, Stony Brook University, NY, USA; (2) Biomedicine Research Institute of Buenos Aires, Argentina |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code used to fit the models is available at https://github.com/diegoarri91/mmd-glm. |
| Open Datasets | Yes | We used two small datasets from monkey ventral premotor cortex (Monkey PMv, Figs. 3A-H) and human neocortex (Human Cortex, Figs. 3I-P) that are prone to yield unstable ML parameters [18, 19]. ... We used a dataset recorded from the lateral intraparietal (LIP) area of a monkey during a perceptual decision-making task [17, 21]. |
| Dataset Splits | Yes | We used 50 trials for training the models and 50 for validation. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies (e.g., library or solver names with version numbers) required to replicate the experiments. |
| Experiment Setup | Yes | To determine the weight of the MMD term (α), we tried values on a grid and used the smallest α for which the MMD-GLM samples matched the data firing rate within a 10% interval. To study the variability of the stochastic optimization, we repeated the procedure 20 times and report average values. ... We initialized the coefficients of the history filter at zero and the bias at its MLE value for every optimization. We then minimized NLL + αMMD, drawing 100 trials from the model at each optimization step to compute the MMD and its gradient. |
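
To make the quoted setup concrete, below is a minimal PyTorch sketch of the fitting procedure: a GLM with a history filter is trained by minimizing NLL + α·MMD, drawing 100 trials from the model at each step, with the bias initialized at its (filter-free) MLE and the filter at zero, and the smallest α whose sampled firing rate matches the data rate within 10% is kept. The Bernoulli discretization, the linear kernel on binned spike counts, the score-function (REINFORCE) surrogate for differentiating through the discrete samples, the grid values, and all hyperparameters are illustrative assumptions, not the authors' exact choices; their implementation is at https://github.com/diegoarri91/mmd-glm.

```python
import torch
import torch.nn.functional as F

H, T = 20, 200                                     # history-filter length and bins per trial (assumed)
data = torch.bernoulli(0.05 * torch.ones(50, T))   # stand-in for the 50 training trials

def log_intensity(spikes, theta):
    """Per-bin Bernoulli logit: bias + causal convolution of past spikes with the history filter."""
    bias, k = theta[0], theta[1:]
    padded = F.pad(spikes[:, None, :], (H, 0))[..., :-1]   # shift so only strictly-past spikes enter
    hist = F.conv1d(padded, k.flip(0)[None, None, :])
    return bias + hist[:, 0, :]

@torch.no_grad()
def sample(theta, n_trials):
    """Autoregressively draw spike trains from the model (sampling itself is not differentiated)."""
    bias, kf = theta[0], theta[1:].flip(0)
    y = torch.zeros(n_trials, H + T)                       # H leading zero-padding bins
    for t in range(T):
        u = bias + y[:, t:t + H] @ kf
        y[:, H + t] = torch.bernoulli(torch.sigmoid(u))
    return y[:, H:]

def fit(alpha, data, steps=500, n_samples=100):
    theta = torch.zeros(H + 1)
    theta[0] = torch.logit(data.mean())    # bias at its filter-free MLE; filter at zero (as quoted)
    theta.requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=1e-2)
    for _ in range(steps):
        y = sample(theta, n_samples)       # 100 model trials per optimization step, as quoted
        nll = F.binary_cross_entropy_with_logits(
            log_intensity(data, theta), data, reduction='sum') / len(data)
        # Per-trial log-likelihood of the *sampled* trains, used by the gradient estimator.
        logp = -F.binary_cross_entropy_with_logits(
            log_intensity(y, theta), y, reduction='none').sum(1)
        Kyy, Kxy = y @ y.T, data @ y.T     # linear kernel on binned spike counts (an assumption)
        # Score-function surrogate: its gradient matches the gradient of E[MMD^2]
        # w.r.t. theta even though the spike samples are discrete.
        surrogate = (Kyy.detach() * (logp[:, None] + logp[None, :])).mean() \
                    - 2.0 * (Kxy.detach() * logp[None, :]).mean()
        loss = nll + alpha * surrogate
        opt.zero_grad(); loss.backward(); opt.step()
    return theta.detach()

# Grid over alpha: keep the smallest value whose sampled firing rate
# matches the data rate within 10%, per the quoted selection rule.
data_rate = data.mean()
for alpha in (0.01, 0.1, 1.0, 10.0):       # hypothetical grid values
    theta = fit(alpha, data)
    rate = sample(theta, 100).mean()
    if (rate - data_rate).abs() / data_rate < 0.10:
        break
```

The term k(x, x') over data pairs is constant in θ, so it is omitted from the surrogate; only the model-model and data-model kernel terms carry gradient, via the log-likelihood of the sampled trains.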