convSeq: Fast and Scalable Method for Detecting Patterns in Spike Data

Authors: Roman Koshkin, Tomoki Fukai

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method's performance is validated on various synthetic data and real neural recordings, revealing spike sequences with unprecedented scalability and efficiency.
Researcher Affiliation | Academia | Roman Koshkin (1), Tomoki Fukai (1); (1) Neural Coding and Brain Computing Unit, Okinawa Institute of Science and Technology, Okinawa, Japan.
Pseudocode | Yes | Appendix B: Algorithms. Algorithm 1: with minimally constrained filters; Algorithm 2: with parameterized truncated Gaussians (see the illustrative filter sketch after the table).
Open Source Code | Yes | https://github.com/RomanKoshkin/conv-seq
Open Datasets | Yes | We used a dataset from Rubin et al. (2019), which is a recording of CA1 neurons of a mouse running on a linear track and collecting water rewards dispensed at its ends. The dataset is available at https://github.com/zivlab/island and represents a binary matrix obtained by thresholding the original Ca2+ imaging data.
Dataset Splits | No | The paper evaluates performance using metrics such as true positive rate, false positive rate, and false negative rate on synthetic and real datasets, but it does not specify explicit train/validation/test splits with percentages, sample counts, or a detailed splitting methodology for reproducibility.
Hardware Specification | Yes | All the experiments were run on a Linux machine with a 64-core AMD EPYC 7702 CPU with 503 GB of RAM and an NVIDIA A6000 GPU with 48.67 GB of RAM.
Software Dependencies | No | The model was implemented in PyTorch (Paszke et al., 2019) and optimized with the Adam optimizer (Kingma & Ba, 2014); however, no version numbers or complete dependency list is provided.
Experiment Setup | Yes | The model was implemented in PyTorch (Paszke et al., 2019) and optimized with the Adam optimizer (Kingma & Ba, 2014) with default parameters, except for the learning rate, which was set to 0.1 for faster convergence. For the 2D convolution operation we used no padding in the dimension of neurons and a padding of M//2 zeros in the time dimension... The weights of the total variation and cross-correlation penalty terms, as well as other hyperparameters, are listed in Appendix A (see the illustrative sketch below).
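
The setup quoted in the Experiment Setup row can be summarized in code. The following is a minimal sketch, not the authors' implementation: the sizes N, T, M, the single-filter layout, and the placeholder loss are assumptions for illustration; only the padding scheme (none over neurons, M//2 zeros over time), the Adam optimizer, and the 0.1 learning rate come from the quoted text.

```python
# Minimal sketch of the quoted setup (not the authors' code).
# Assumed: toy sizes N, T, M; a single filter; a placeholder loss.
# From the paper: no padding over neurons, M//2 zero-padding over time,
# Adam with default parameters except lr = 0.1.
import torch
import torch.nn as nn

N, T, M = 200, 5000, 100                           # neurons, time bins, filter width (assumed)
spikes = (torch.rand(1, 1, N, T) < 0.02).float()   # toy binary spike matrix

conv = nn.Conv2d(in_channels=1, out_channels=1,
                 kernel_size=(N, M),
                 padding=(0, M // 2),              # no padding over neurons, M//2 over time
                 bias=False)

optimizer = torch.optim.Adam(conv.parameters(), lr=0.1)

out = conv(spikes)            # activation of the filter over time
loss = -out.var()             # placeholder objective; the actual loss adds
                              # total-variation and cross-correlation penalties (Appendix A)
loss.backward()
optimizer.step()
```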
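
The Pseudocode row refers to a variant with parameterized truncated Gaussian filters (Algorithm 2). As an illustration only, and under the assumption that each neuron's filter row is a Gaussian bump with a learnable center and width truncated to the filter window, such a parameterization could look like the sketch below; all names and values are hypothetical, not the paper's Algorithm 2.

```python
# Hypothetical illustration of a "parameterized truncated Gaussian" filter bank:
# each neuron's row of the (N x M) filter is a Gaussian with a learnable
# center and width, truncated to the filter window of length M.
import torch

def truncated_gaussian_filter(centers, widths, M):
    # centers, widths: tensors of shape (N,), learnable per-neuron parameters
    t = torch.arange(M, dtype=torch.float32)                     # time axis within the window
    g = torch.exp(-0.5 * ((t[None, :] - centers[:, None]) / widths[:, None]) ** 2)
    return g / g.sum(dim=1, keepdim=True)                        # normalize each row

N, M = 200, 100                                                  # illustrative sizes
centers = torch.nn.Parameter(torch.rand(N) * M)                  # per-neuron peak latency
widths = torch.nn.Parameter(torch.full((N,), 5.0))               # per-neuron temporal spread
W = truncated_gaussian_filter(centers, widths, M)                # (N, M) filter to slide over the spike matrix
```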