A Differentiable Partially Observable Generalized Linear Model with Forward-Backward Message Passing

Authors: Chengrui Li, Weihan Li, Yule Wang, Anqi Wu

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments show that our differentiable POGLMs with our forward-backward message passing produce better performance on one synthetic and two real-world datasets. Furthermore, our new method yields more interpretable parameters, underscoring its significance in neuroscience.
Researcher Affiliation | Academia | School of Computational Science & Engineering, Georgia Institute of Technology, Atlanta, USA. Correspondence to: Chengrui Li <cnlichengrui@gatech.edu>, Anqi Wu <anqiwu@gatech.edu>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/JerrySoybean/poglm
Open Datasets | Yes | We apply various method combinations to analyze a real neural spike train recorded from 27 retinal ganglion neurons while a mouse is engaged in a visual task for approximately 20 minutes (Pillow & Scott, 2012). ... Finally, we apply different method combinations to a dataset obtained from the primary visual cortex (PVC-5) (Chu et al., 2014), available at https://crcns.org/data-sets/pvc/pvc-5.
Dataset Splits | No | The paper describes training and testing splits but does not mention a validation set. For example: 'For each trial, we generate 40 spike trains for training and 20 spike trains for testing.' (synthetic dataset) and 'We partition the spike train into training and test sets, using the first 2/3 segment for training and the remaining 1/3 segment for testing.' (RGC dataset). A minimal sketch of these splits appears after the table.
Hardware Specification | No | The paper does not state the hardware (e.g., CPU/GPU models, clock speeds, or memory) used to run its experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' but does not specify any software libraries or dependencies with version numbers (e.g., Python, PyTorch, TensorFlow) that would be needed to replicate the experiments.
Experiment Setup | Yes | We utilize the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.05. The optimization process runs for 20 epochs, and within each epoch, optimization is performed using 4 batches, each of size 10. ... The optimization is performed using the Adam optimizer with a learning rate of 0.02. Each training procedure undergoes 20 epochs, employing a batch size of 32. A training-loop sketch using the first configuration appears after the table.
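As a minimal sketch of the splits quoted in the Dataset Splits row, the snippet below uses randomly generated spike trains in place of the actual data; the array shapes, firing rates, and variable names (synthetic, recording, etc.) are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dataset: 60 generated spike trains per trial,
# 40 used for training and 20 for testing (shapes are hypothetical).
synthetic = rng.binomial(1, 0.05, size=(60, 10, 100))  # (trains, neurons, bins)
train_synth, test_synth = synthetic[:40], synthetic[40:]

# RGC dataset: one long recording from 27 neurons, split temporally,
# first 2/3 for training and the remaining 1/3 for testing
# (no validation set is reported in the paper).
recording = rng.binomial(1, 0.05, size=(27, 36000))
cut = recording.shape[1] * 2 // 3
train_rgc, test_rgc = recording[:, :cut], recording[:, cut:]
```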
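Similarly, here is a hedged sketch of the synthetic-data training configuration quoted in the Experiment Setup row (Adam, learning rate 0.05, 20 epochs, 4 batches of size 10). The model, data, and squared-error objective are placeholders standing in for the POGLM and its variational objective; only the optimizer hyperparameters come from the paper.

```python
import torch

# Placeholder model and data; only the Adam settings below are from the paper.
model = torch.nn.Linear(10, 10)
data = torch.rand(40, 10)  # stand-in for 40 training spike trains

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
num_epochs, num_batches, batch_size = 20, 4, 10

for epoch in range(num_epochs):
    perm = torch.randperm(data.shape[0])
    for b in range(num_batches):
        batch = data[perm[b * batch_size:(b + 1) * batch_size]]
        loss = ((model(batch) - batch) ** 2).mean()  # placeholder objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```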