Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Long-Term EEG Partitioning for Seizure Onset Detection
Authors: Zheng Chen, Yasuko Matsubara, Yasushi Sakurai, Jimeng Sun
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three datasets demonstrate that our method can correct misclassifications, achieving 5%-11% classification improvements over other baselines and accurately detecting seizure onsets. |
| Researcher Affiliation | Academia | Zheng Chen¹, Yasuko Matsubara¹, Yasushi Sakurai¹, Jimeng Sun²˒³ — ¹SANKEN, Osaka University; ²University of Illinois Urbana-Champaign; ³Carle Illinois College of Medicine, University of Illinois Urbana-Champaign. EMAIL; EMAIL |
| Pseudocode | No | The paper describes the method using mathematical formulations (Eq. 1-6) and a system overview diagram (Figure 2), but does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code or provide a link to a code repository. |
| Open Datasets | Yes | We evaluated SODor for the seizure onset (SO) detection task on three real-world datasets. [...] CHB-MIT comprises 844 hours of continuous scalp EEG data from 22 patients, recorded across 22 channels, with a total of 163 seizure episodes. [...] HUH is collected from University of Helsinki, Finland. It consists of scalp 21-channel EEG data of 79 patients, serving the seizure detection task. TUSZ dataset is part of the Temple University Hospital EEG Seizure Corpus. It comprises 5,612 EEG recordings with 3,050 clinically annotated seizures. |
| Dataset Splits | Yes | We divided each dataset into 70%/20%/10% for training, testing, and validation. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU, CPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper describes the methodology and uses terms like 'diffusion convolution', 'recurrent neural network', and 'graphical lasso' but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x). |
| Experiment Setup | No | The paper discusses the models and optimization process, but specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings are not explicitly provided in the main text. It mentions parameter search for baselines but not for its own method's training. |