DMNet: Self-comparison Driven Model for Subject-independent Seizure Detection
Authors: Shihao Tu, Linfeng Cao, Daoze Zhang, Junru Chen, Lvbin Ma, Yin Zhang, Yang Yang
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that DMNet significantly outperforms previous SOTAs while maintaining high efficiency on a real-world clinical dataset that we collected, as well as two public datasets for subject-independent seizure detection. |
| Researcher Affiliation | Collaboration | Shihao Tu, Zhejiang University, shihao.tu@zju.edu.cn; Lvbin Ma, Zhejiang Huayun Information Technology Co. Ltd, gmmmfly@163.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (e.g., sections or figures labeled "Pseudocode" or "Algorithm"). |
| Open Source Code | Yes | The private dataset is not released, so one cannot reproduce the experiments on it. However, we also use two other public datasets, and the code of our work is fully provided, which could help in understanding our work. |
| Open Datasets | Yes | To evaluate the performance of our DMNet model, we conduct experiments on both the public benchmark datasets, MAYO and FNUSA [25], and the private clinical dataset (for details, refer to App. C). To further assess the generalization capability of DMNet on a broader range of subjects with greater heterogeneity, we evaluated the model on data from 179 previously unseen subjects in the large TUSZ EEG dataset [1]. |
| Dataset Splits | Yes | To conduct the experiment under the domain generalization setting, we divide the subjects in the datasets into multiple groups and assign different groups as source and target domains for model training, validation, and testing. Specifically, we assign two groups to the training set and one group to the validation set, collectively forming the source domains. For the public dataset, we adopt a 4-1-1 setting for model training, validation, and testing, respectively: randomly choose 5 groups (4 training groups and 1 validation group) as the source domain, while the remaining group serves as the target domain for testing. (A minimal split sketch under this setting follows the table.) |
| Hardware Specification | Yes | All experiments were run on a Linux system with 2 CPUs (AMD EPYC 7H12 64-Core Processor) and 4 GPUs (NVIDIA GeForce RTX 3090). |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. It only mentions the optimizer "Adam" without a version. |
| Experiment Setup | Yes | More experimental details, including model and optimization settings, are listed in Table 2. Table 2 provides specific hyperparameters such as Length of Segment ℓ, Number of Segments N, Number of Clusters K, Base Filter Number Ch, Batch Size, Optimizer (Adam), Learning Rate, Max Epoch, and Valid Metric. (A hedged configuration sketch based on these fields follows the table.) |
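
The subject-independent, group-wise split quoted above can be illustrated with a minimal sketch of the reported 4-1-1 setting. This is not the authors' released code: the helper name `subject_independent_split`, the group count of six, and the random seed are hypothetical choices made for illustration.

```python
import random

def subject_independent_split(subject_ids, num_groups=6, seed=0):
    """Sketch of a group-wise, subject-independent 4-1-1 split:
    4 groups for training and 1 for validation form the source domain,
    and the remaining group is the unseen target domain for testing."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)

    # Partition subjects into roughly equal, non-overlapping groups.
    groups = [ids[i::num_groups] for i in range(num_groups)]
    rng.shuffle(groups)

    train_groups = groups[:4]   # source domain: training
    val_group = groups[4]       # source domain: validation
    test_group = groups[5]      # target domain: held-out subjects

    train = [s for g in train_groups for s in g]
    return train, val_group, test_group

# Usage with hypothetical subject identifiers; no subject appears in
# both the source and target domains.
train_s, val_s, test_s = subject_independent_split(range(24))
assert not set(train_s) & set(test_s)
```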
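
Similarly, the hyperparameters enumerated for Table 2 map onto a standard Adam training configuration. The sketch below is hypothetical: every numeric value is a placeholder (the actual settings are in the paper's Table 2), and the single linear layer merely stands in for the DMNet model.

```python
import torch

# Placeholder values only; the paper's Table 2 holds the real settings.
config = {
    "segment_length": 256,   # Length of Segment ℓ (placeholder)
    "num_segments": 16,      # Number of Segments N (placeholder)
    "num_clusters": 8,       # Number of Clusters K (placeholder)
    "base_filters": 32,      # Base Filter Number Ch (placeholder)
    "batch_size": 64,        # Batch Size (placeholder)
    "learning_rate": 1e-3,   # Learning Rate (placeholder)
    "max_epochs": 50,        # Max Epoch (placeholder)
}

# A one-layer stand-in for DMNet, trained with the reported Adam optimizer.
model = torch.nn.Linear(config["segment_length"], 2)
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])
```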