High-dimensional neural spike train analysis with generalized count linear dynamical systems

Authors: Yuanjun Gao, Lars Buesing, Krishna V. Shenoy, John P. Cunningham

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We apply our model to data from primate motor cortex and demonstrate performance improvements over state-of-the-art methods, both in capturing the variance structure of the data and in held-out prediction." (Section 4, Experimental results)
Researcher Affiliation | Academia | Yuanjun Gao (Department of Statistics, Columbia University, New York, NY 10027, yg2312@columbia.edu); Lars Buesing (Department of Statistics, Columbia University, New York, NY 10027, lars@stat.columbia.edu); Krishna V. Shenoy (Department of Electrical Engineering, Stanford University, Stanford, CA 94305, shenoy@stanford.edu); John P. Cunningham (Department of Statistics, Columbia University, New York, NY 10027, jpc2181@columbia.edu)
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | "Code can be found at https://bitbucket.org/mackelab/pop_spike_dyn."
Open Datasets | Yes | "We analyze recordings of populations of neurons in the primate motor cortex during a reaching experiment (G20040123), details of which have been described previously [7, 8]."
Dataset Splits | Yes | "For each reaching target we use 4-fold cross-validation and the results are averaged across all 14 reaching targets."
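The evaluation protocol quoted above (4-fold cross-validation per reaching target, averaged over 14 targets) can be sketched as follows. This is a hypothetical illustration, not the paper's code: `fit_model`, `score_model`, and the synthetic spike counts are placeholders for the actual PLDS/GCLDS fitting and held-out prediction.

```python
import numpy as np

def fit_model(train_trials):
    # Placeholder for fitting a latent dynamical system (e.g. PLDS/GCLDS).
    return np.mean(train_trials)

def score_model(model, test_trials):
    # Placeholder for a held-out predictive score (higher is better).
    return -np.mean((test_trials - model) ** 2)

rng = np.random.default_rng(0)
n_targets, n_trials = 14, 20  # 14 reaching targets; trial count is assumed

scores = []
for target in range(n_targets):
    # Fake spike-count data standing in for one target's recorded trials.
    trials = rng.poisson(5.0, size=n_trials).astype(float)
    folds = np.array_split(np.arange(n_trials), 4)  # 4-fold CV
    fold_scores = []
    for k in range(4):
        test_idx = folds[k]
        train_idx = np.setdiff1d(np.arange(n_trials), test_idx)
        model = fit_model(trials[train_idx])
        fold_scores.append(score_model(model, trials[test_idx]))
    scores.append(np.mean(fold_scores))

overall = np.mean(scores)  # final number: average across all 14 targets
```

The key point of the protocol is that folds are formed within each target, so every target contributes equally to the averaged result.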
Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or cloud instance types) used for running experiments were mentioned in the paper.
Software Dependencies | No | No specific software versions or dependencies (e.g., library names with version numbers) were mentioned in the paper.
Experiment Setup | No | The paper varies the latent dimension p from 2 to 8 and describes initialization ("initialize the PLDS using nuclear norm minimization"; "initialize the GCLDS models with the fitted PLDS"), but it does not provide comprehensive setup details such as learning rates, batch sizes, or optimizer settings.
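The setup that the paper does describe (a sweep of latent dimension p from 2 to 8, PLDS initialized by nuclear norm minimization, GCLDS initialized from the fitted PLDS) can be sketched as the loop below. All function names here are hypothetical placeholders for the fitting routines in the released code.

```python
def init_plds_nuclear_norm(data, p):
    # Placeholder: nuclear-norm-minimization initialization of a PLDS.
    return {"p": p, "stage": "plds-init"}

def fit_plds(data, init):
    # Placeholder: EM-style fitting of the PLDS from its initialization.
    return {"p": init["p"], "stage": "plds-fit"}

def fit_gclds(data, init):
    # Placeholder: fitting the GCLDS, warm-started from the fitted PLDS.
    return {"p": init["p"], "stage": "gclds-fit"}

data = None  # stands in for the spike-count recordings
models = {}
for p in range(2, 9):  # latent dimensions 2..8, as stated in the paper
    plds = fit_plds(data, init_plds_nuclear_norm(data, p))
    gclds = fit_gclds(data, plds)  # GCLDS initialized with the fitted PLDS
    models[p] = (plds, gclds)
```

The two-stage warm start matters because the GCLDS likelihood is harder to optimize directly; starting from a fitted PLDS gives it a sensible region of parameter space.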