Neural decoding from stereotactic EEG: accounting for electrode variability across subjects
Authors: Georgios Mentzelopoulos, Evangelos Chatzipantazis, Ashwin Ramayya, Michelle Hedlund, Vivek Buch, Kostas Daniilidis, Konrad Kording, Flavia Vitale
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that our model is able to decode the trial-wise response time of the subjects during the behavioral task solely from neural data. We also show that the neural representations learned by pretraining our model across individuals can be transferred in a few-shot manner to new subjects. |
| Researcher Affiliation | Academia | 1University of Pennsylvania, 2Stanford University, 3Archimedes, Athena RC |
| Pseudocode | No | The paper describes the architecture and processing steps in text and diagrams, but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | We will make the model and code publicly available as a resource for the community. Project page: https://gmentz.github.io/seegnificant. Our code is available at https://github.com/gmentz/seegnificant. |
| Open Datasets | No | The dataset used in this work contains electrophysiological data collected from human patients. To respect their privacy and comply with HIPAA regulations, we cannot make the dataset public. |
| Dataset Splits | Yes | For all models, the train/validation/test split was 70/15/15 %. |
| Hardware Specification | Yes | All models were trained on a machine with an AMD EPYC 7502P 32-Core Processor and 1 Nvidia A40 GPU with 44.99 GiB of memory. Table 2: Model inference time on different hardware — AMD EPYC 7502P + Nvidia A40: CPU 9.1, GPU 5.1; Intel Core i9 + Nvidia A2000: CPU 4.0, GPU 7.9. |
| Software Dependencies | Yes | All models were implemented and trained using PyTorch 2.1.0+cu121 [Paszke et al., 2019]. The following non-deep learning models were trained on single subjects, using scikit-learn (version 1.2.2). |
| Experiment Setup | Yes | All models were implemented and trained using PyTorch 2.1.0+cu121 [Paszke et al., 2019]. AdamW was used as the optimizer [Leszczynski et al., 2020] (with β1 = 0.5 and β2 = 0.999). All models were trained for 1000 epochs. A step learning rate scheduler was used with an initial learning rate set to 10⁻³ and decayed by a factor of 0.5 every 200 epochs for single-subject models and by a factor of 0.9 every 100 epochs for multi-subject models. Batch size was fixed to 64 and 1024 for all single-subject and multi-subject models, respectively. All models were optimized using Huber loss, except for when finetuning the multi-session, multi-subject model (see section 4.3) to individual subjects, where MSE loss was used. |
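
The experiment setup row above can be read as a concrete PyTorch training configuration. The sketch below is a minimal illustration under that reading: the optimizer betas, learning-rate schedule, batch size, loss, and epoch count are taken from the quoted setup, while the model, data, and all variable names are placeholders, not the authors' actual decoder or dataset.

```python
# Minimal sketch of the reported training configuration (PyTorch).
# Only the hyperparameters (betas, LR schedule, batch size, loss,
# epochs) come from the table above; everything else is a placeholder.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data standing in for per-trial neural features and
# trial-wise response times.
x = torch.randn(512, 64)
y = torch.randn(512, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)  # 64 single-subject, 1024 multi-subject

# Placeholder regression model (not the sEEG decoder architecture).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))

# AdamW with β1 = 0.5, β2 = 0.999, initial LR 1e-3.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, betas=(0.5, 0.999))

# Step LR schedule: decay by 0.5 every 200 epochs (single-subject);
# multi-subject models instead use gamma=0.9 every 100 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

# Huber loss for training; MSE is used when finetuning the
# multi-session, multi-subject model to individual subjects.
criterion = nn.HuberLoss()

for epoch in range(1000):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()
```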