Scaling the Poisson GLM to massive neural datasets through polynomial approximations

Authors: David Zoltowski, Jonathan W. Pillow

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "After validating these estimators on simulated spike train data and a spike train recording from a primate retinal ganglion cell, we demonstrate the scalability of these methods by fitting a fully-coupled GLM to the responses of 831 neurons recorded across five different regions of the mouse brain."
Researcher Affiliation | Academia | David M. Zoltowski, Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, zoltowski@princeton.edu; Jonathan W. Pillow, Princeton Neuroscience Institute & Psychology, Princeton University, Princeton, NJ 08544, pillow@princeton.edu
Pseudocode | No | The paper describes methods and derivations but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | "An implementation of paGLM is available at https://github.com/davidzoltowski/paglm."
Open Datasets | Yes | "We next tested the paGLM-2 estimator using spike train data recorded from a single parasol retinal ganglion cell (RGC) in response to a full-field binary flicker stimulus binned at 8.66 ms [24]. We fit a fully-coupled Poisson GLM to the spiking responses of N = 831 neurons simultaneously recorded from the mouse thalamus, visual cortex, hippocampus, striatum, and motor cortex using two Neuropixels probes [8]."
Dataset Splits | Yes | "We held out the first minute as a validation set and used the next 10 minutes to compute the exact and paGLM-2 MAP estimates with a fixed ridge prior, as hyperparameter optimization was computationally infeasible in the exact MAP case. The fit model had positive spike prediction accuracy for 79.6% of neurons (469 out of 589) whose firing rates were greater than 0.5 Hz in both the training and validation periods."
Hardware Specification | No | The paper mentions that data were recorded using Neuropixels probes [8] but does not provide specific details about the hardware (e.g., GPU/CPU models, RAM) used to perform the computational experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., "Python 3.8, PyTorch 1.9").
Experiment Setup | Yes | "In our experiments, we implement ridge regression with C^-1 = λI, Bayesian smoothing with C^-1 = λL, where L is the discrete Laplacian operator [16, 14], and automatic relevance determination (ARD) with C^-1_ii = λ_i [10, 22, 18, 25]. We used ridge regression to regularize the weights. We selected the approximation interval using a random subset of the data and selected the ridge penalty by optimizing the approximate log-likelihood (Eq. 18). We used a random subset of the training data to select the approximation interval for each neuron, and we computed the exact MAP estimates using 50 iterations of quasi-Newton optimization. We placed an ARD prior over each set of 3 coupling weights incoming from other neurons, and optimized the ARD hyperparameters using the fixed-point update equations [2, 18]. We found that this method sometimes under-regularized, and we therefore thresholded the prior precision values from below at 26."
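The core idea behind the paper's estimator, a quadratic approximation of the Poisson GLM log-likelihood that yields a closed-form ridge MAP estimate, can be sketched as follows. This is a minimal illustration on simulated data with hypothetical variable names, not the authors' implementation (see the linked paglm repository): a least-squares quadratic fit over a grid stands in for the paper's Chebyshev approximation, and the time-bin size is taken as 1.

```python
import numpy as np

# Minimal sketch of a polynomial-approximation (paGLM-style) ridge MAP
# estimate for a Poisson GLM with exponential nonlinearity.
rng = np.random.default_rng(0)
T, D = 5000, 10
X = rng.normal(scale=0.3, size=(T, D))   # design matrix (illustrative)
w_true = rng.normal(scale=0.5, size=D)   # true filter weights
y = rng.poisson(np.exp(X @ w_true))      # Poisson spike counts

# Quadratic approximation of exp on an interval covering the linear
# predictors (least-squares fit standing in for a Chebyshev fit):
#   exp(u) ≈ a0 + a1*u + a2*u^2
u = X @ w_true                           # in practice: range taken from a data subset
grid = np.linspace(u.min(), u.max(), 200)
a2, a1, a0 = np.polyfit(grid, np.exp(grid), deg=2)

# Under this approximation the log-likelihood is quadratic in w:
#   L(w) ≈ y^T X w - a0*T - a1*1^T X w - a2*w^T X^T X w,
# so the ridge MAP estimate (prior precision lam*I) is a single linear solve.
lam = 1.0
A = 2.0 * a2 * X.T @ X + lam * np.eye(D)
b = X.T @ y - a1 * X.sum(axis=0)
w_pa = np.linalg.solve(A, b)

print(np.corrcoef(w_pa, w_true)[0, 1])   # agreement with the true weights
```

This illustrates why the approach scales: the estimate requires only the sufficient statistics X.T @ y and X.T @ X, which can be accumulated in a single pass over the data, rather than repeated likelihood evaluations inside an iterative optimizer.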