Bayesian nonparametric (non-)renewal processes for analyzing neural spike train variability

Authors: David Liu, Máté Lengyel

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "After systematically validating our method on synthetic data, we apply it to two foundational datasets of animal navigation: head direction cells in freely moving mice and hippocampal place cells in rats running along a linear track. Our model exhibits competitive or better predictive power compared to state-of-the-art baselines, and outperforms them in terms of capturing interspike interval statistics."
Researcher Affiliation | Academia | "David Liu, Department of Engineering, University of Cambridge, dl543@cam.ac.uk; Máté Lengyel, Department of Engineering, University of Cambridge, and Department of Cognitive Science, Central European University, m.lengyel@eng.cam.ac.uk"
Pseudocode | No | The paper describes the generative model and inference scheme but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | "We provide a JAX [4] implementation of our method as well as established baseline models within a scalable general variational inference scheme." Code available at https://github.com/davindicode/nonparametric-nonrenewal-process
Open Datasets | Yes | "We apply it to two foundational datasets of animal navigation: head direction cells in freely moving mice [63, 64] and hippocampal place cells in rats running along a linear track [54]."
Dataset Splits | Yes | Experiments involve fitting to the first half of a dataset (≈18 min. for the mouse data, ≈32 min. for the rat data) and testing on the second half, split into 5 consecutive segments.
Hardware Specification | No | The paper does not report the hardware used to run the experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions a "JAX [4] implementation" but does not specify its version or other software dependencies with version numbers.
Experiment Setup | Yes | "All datasets discretize spike trains and input time series at regular intervals of Δt = 1 ms. We use a product kernel for k(x, x') with periodic kernels for angular dimensions, and squared exponential kernels in other cases. For k(τ̃, τ̃') and k(ξ̃, ξ̃'), we pick a product kernel with Matérn-3/2 (see Fig. 12 for different kernel choices) and set the maximum ISI lag K = 3. [...] GP inducing points were randomly initialized, and for a fair comparison, all models used 8 inducing points for each covariate dimension (including temporal dimensions τ̃ and ξ̃ in the NPNR process). For each experiment, we repeat model fitting with 3 different random seeds and pick the model with the best training likelihood. Further details on experiments are presented in Appendix C."
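The kernel structure quoted in the Experiment Setup row can be illustrated with a minimal NumPy sketch: a product kernel combining a periodic kernel for angular covariates (e.g. head direction) with a squared exponential kernel for other covariates, plus a Matérn-3/2 kernel as used for the time-rescaled ISI dimensions. All function names, lengthscales, and the toy two-dimensional input are illustrative assumptions, not the paper's JAX implementation.

```python
import numpy as np

def periodic_kernel(x, y, lengthscale=1.0, period=2 * np.pi):
    # Periodic kernel for angular covariates, e.g. head direction.
    d = np.pi * np.abs(x - y) / period
    return np.exp(-2.0 * np.sin(d) ** 2 / lengthscale ** 2)

def squared_exponential_kernel(x, y, lengthscale=1.0):
    # Squared exponential kernel for non-angular covariates.
    return np.exp(-0.5 * (x - y) ** 2 / lengthscale ** 2)

def matern32_kernel(x, y, lengthscale=1.0):
    # Matérn-3/2 kernel, the choice reported for the ISI dimensions.
    r = np.sqrt(3.0) * np.abs(x - y) / lengthscale
    return (1.0 + r) * np.exp(-r)

def product_kernel(x, y, kernels):
    # A product kernel multiplies one base kernel per covariate dimension.
    return np.prod([k(xi, yi) for k, (xi, yi) in zip(kernels, zip(x, y))])

# Toy example: one angular dimension and one linear (position) dimension.
x = np.array([0.1, 2.0])
y = np.array([0.3, 1.5])
val = product_kernel(x, y, [periodic_kernel, squared_exponential_kernel])
```

Because each factor kernel equals 1 at zero distance, the product kernel is also 1 when evaluated at identical inputs, and lies between 0 and 1 otherwise for these stationary choices.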