Spike Distance Function as a Learning Objective for Spike Prediction
Authors: Kevin Doran, Marvin Seifert, Carola A. M. Yovanovich, Tom Baden
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using recordings of chicken and frog retinal ganglion cells responding to visual stimuli, we compare the performance of our approach to that of Poisson models trained with various summation intervals. We show that our approach outperforms the use of Poisson models at spike train inference. |
| Researcher Affiliation | Academia | ¹School of Life Sciences, University of Sussex, UK; ²School of Engineering and Informatics, University of Sussex, UK; ³Institute for Ophthalmic Research, University of Tübingen, Germany. |
| Pseudocode | Yes | Algorithm 1 (spike inference algorithm): predict a spike in every time step, then iteratively remove spikes; a spike is removed if the L2 norm between the candidate and target distance arrays is lower when the spike is not present. Spikes are iterated in order of their estimated effect on the error (a code sketch of this procedure appears below the table). |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository for the methodology described. |
| Open Datasets | Yes | The data came from a 15-minute recording of chicken RGCs exposed to full-field colour noise, recorded by Seifert et al. (2023). |
| Dataset Splits | Yes | The 15 minutes was split according to the ratio (7, 2, 1) into training, validation and test sets (see the split sketch below the table). |
| Hardware Specification | Yes | All training and inference was carried out on a single workstation with an AMD Ryzen 9 5900X CPU, an Nvidia RTX 3090 GPU and 128 GiB of RAM. |
| Software Dependencies | Yes | The 1-cycle scheduler policy described by Smith and Topin (2019) was used, with three-phase enabled and other options left at their default values in PyTorch 2.0's implementation (see the optimizer/scheduler sketch below the table). |
| Experiment Setup | Yes | All models were trained for 80 epochs using the AdamW optimizer with the 1-cycle learning rate policy (Smith, 2017). [...] Maximum learning rate: 5 × 10⁻⁴. [...] Batch size: 256. [...] Epochs: 80. [...] AdamW parameters: (β₁, β₂, ε, weight decay) = (0.9, 0.99, 1 × 10⁻⁵, 0.3). |
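
The greedy removal procedure described for Algorithm 1 can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the helper names (`spike_distance`, `infer_spikes`) are invented here, and the ordering heuristic simply re-estimates each spike's effect on the error by trial removal.

```python
import numpy as np

def spike_distance(spikes: np.ndarray) -> np.ndarray:
    """Distance (in bins) from each time step to the nearest spike.

    `spikes` is a boolean array; if no spikes remain, a large constant is returned.
    """
    n = len(spikes)
    idx = np.flatnonzero(spikes)
    if idx.size == 0:
        return np.full(n, n, dtype=float)
    t = np.arange(n)
    # Distance from every bin to every spike; fine for short windows.
    return np.abs(t[:, None] - idx[None, :]).min(axis=1).astype(float)

def infer_spikes(target_dist: np.ndarray) -> np.ndarray:
    """Greedy inference: start with a spike in every bin, then remove spikes
    while removal lowers the L2 error to the predicted (target) distance array."""
    spikes = np.ones_like(target_dist, dtype=bool)
    improved = True
    while improved:
        improved = False
        base_err = np.linalg.norm(spike_distance(spikes) - target_dist)
        # Rank candidate removals by their estimated effect on the error.
        gains = []
        for i in np.flatnonzero(spikes):
            trial = spikes.copy()
            trial[i] = False
            err = np.linalg.norm(spike_distance(trial) - target_dist)
            gains.append((base_err - err, i))
        gains.sort(reverse=True)
        # Remove spikes in that order, keeping only removals that still help.
        for _, i in gains:
            trial = spikes.copy()
            trial[i] = False
            if (np.linalg.norm(spike_distance(trial) - target_dist)
                    < np.linalg.norm(spike_distance(spikes) - target_dist)):
                spikes = trial
                improved = True
    return spikes
```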
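
A minimal sketch of the (7, 2, 1) split, assuming the three segments are taken contiguously along the time axis (the table does not state this); the sampling rate `fs` is a placeholder.

```python
import numpy as np

def split_recording(x: np.ndarray, ratios=(7, 2, 1)):
    """Split a recording along the time axis into contiguous
    train/validation/test segments with the given ratio."""
    total = sum(ratios)
    n = x.shape[0]
    bounds = np.cumsum([int(n * r / total) for r in ratios[:-1]])
    return np.split(x, bounds)

# e.g. a 15-minute recording; `fs` (bins per second) is assumed for illustration.
fs = 100
recording = np.zeros(15 * 60 * fs)
train, val, test = split_recording(recording)
```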
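
The reported optimizer and scheduler settings map onto PyTorch's `AdamW` and `OneCycleLR` roughly as below. The model, loss and `steps_per_epoch` are placeholders; only the hyperparameters quoted in the table are taken from the paper.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import OneCycleLR

# Hypothetical stand-ins: the paper's architecture is not specified here.
model = torch.nn.Linear(100, 1)
steps_per_epoch = 1000  # assumed; depends on dataset size and batch size (256)
epochs = 80

optimizer = AdamW(
    model.parameters(),
    lr=5e-4,              # maximum learning rate of the 1-cycle policy
    betas=(0.9, 0.99),
    eps=1e-5,
    weight_decay=0.3,
)

scheduler = OneCycleLR(
    optimizer,
    max_lr=5e-4,
    epochs=epochs,
    steps_per_epoch=steps_per_epoch,
    three_phase=True,     # three-phase enabled; other options left at defaults
)

# Training-loop skeleton: the scheduler is stepped once per optimizer step.
# for epoch in range(epochs):
#     for batch in loader:
#         loss = compute_loss(model, batch)   # hypothetical loss function
#         loss.backward()
#         optimizer.step()
#         scheduler.step()
#         optimizer.zero_grad()
```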