Mutually Regressive Point Processes

Authors: Ifigeneia Apostolopoulou, Scott Linderman, Kyle Miller, Artur Dubrawski

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the proposed model on single and multi-neuronal spike train recordings. Results demonstrate that the proposed model, unlike existing point process models, can generate biologically-plausible spike trains, while still achieving competitive predictive likelihoods."
Researcher Affiliation | Academia | Ifigeneia Apostolopoulou, Machine Learning Department, Carnegie Mellon University, iapostol@andrew.cmu.edu; Scott Linderman, Department of Statistics, Stanford University, scott.linderman@stanford.edu; Kyle Miller, Auton Lab, Carnegie Mellon University, mille856@andrew.cmu.edu; Artur Dubrawski, Auton Lab, Carnegie Mellon University, awd@cs.cmu.edu
Pseudocode | Yes | Algorithm 1: Bayesian Inference for Mutually Regressive Point Processes
Open Source Code | Yes | "The library is written in C++. Our code is available at https://github.com/ifiaposto/Mutually-Regressive-Point-Processes"
Open Datasets | Yes | "We repeat the analysis on two datasets (Figure 2.b and Figure 2.c in [35]) for which PP-GLMs have failed in generating stable spiking dynamics. The data is publicly available and can be downloaded from the NSF-funded CRCNS data repository [51]."
Dataset Splits | No | The paper describes a temporal train/test split ('[0, 13000] msec for learning' and '[13000, 26000] msec for testing') but does not mention or specify a separate validation split.
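The temporal split described above can be sketched in a few lines; the spike times here are hypothetical toy values, since the paper's actual recordings come from the CRCNS repository:

```python
import numpy as np

# Hypothetical spike times in msec (the real data is from CRCNS [51]).
spike_times = np.array([120.5, 8000.0, 12999.9, 13000.0, 20000.3, 25999.1])

split = 13000.0  # boundary reported in the paper

train = spike_times[spike_times < split]   # [0, 13000] msec for learning
test = spike_times[spike_times >= split]   # [13000, 26000] msec for testing
```

Note that this is a single chronological cut with no held-out validation window, which is what the "No" result above refers to.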
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions C++ and the Python statistics package statsmodels but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | The parameters of the hierarchical prior were set as follows: ντ = 100, ατ = 0.01, βτ = 1, αµ = 0.001, νµ = 100, λµ = 100 (Figure 2 caption); α0 = 0.015; 2000 burn-in samples were discarded and the last 3000 MCMC samples kept. The time discretization interval used to obtain spike counts and the order of the regression were set to Δt = 0.1 msec and Q = 1, respectively.
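The discretization step above (binning spike times at Δt = 0.1 msec and regressing on Q lagged counts) can be sketched as follows; the spike times and window length are made-up toy values, not the paper's data:

```python
import numpy as np

# Hypothetical spike times in msec, standing in for a recorded spike train.
spike_times = np.array([0.25, 0.31, 1.07, 2.64, 2.71])

dt = 0.1   # time discretization interval (msec), as reported in the paper
Q = 1      # order of the regression, as reported in the paper
T = 3.0    # toy observation window length (msec), chosen for this example

# Bin spike times into counts on a grid of width dt.
edges = np.arange(0.0, T + dt, dt)
counts, _ = np.histogram(spike_times, bins=edges)

# Build a lagged design matrix of order Q: column q-1 holds the counts
# shifted by q bins, the standard autoregressive construction for
# spike-count regressions (e.g. as fit with statsmodels GLMs).
n = len(counts)
X = np.zeros((n, Q))
for q in range(1, Q + 1):
    X[q:, q - 1] = counts[:-q]
```

With Q = 1 each bin is regressed on the count in the immediately preceding bin; larger Q would simply add further lag columns to X.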