Learning Topology-Agnostic EEG Representations with Geometry-Aware Modeling
Authors: Ke Yi, Yansen Wang, Kan Ren, Dongsheng Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | From Section 4 (Experiments): In this section, we first introduce the datasets we used and the overall settings of our experiments. Then, we compare our model with a series of competitive baselines under different settings to demonstrate the strong modeling ability of our proposed MMM framework and its transferability between different datasets. We also include ablation studies to investigate the contributions of the designs separately and showcase the effectiveness of the unified representation for EEG modeling. |
| Researcher Affiliation | Collaboration | Ke Yi, South China University of Technology (cs_kerry@mail.scut.edu.cn); Yansen Wang, Microsoft Research Asia (yansenwang@microsoft.com); Kan Ren, Microsoft Research Asia (kanren@microsoft.com); Dongsheng Li, Microsoft Research Asia (dongsli@microsoft.com) |
| Pseudocode | No | The paper describes its methods narratively and with equations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project link: https://seqml.github.io/MMM/ |
| Open Datasets | Yes | SEED [11] is a discrete EEG emotion dataset elicited by videos. The SEED dataset contains EEG signals of fifteen subjects (eight females and seven males), recorded by 62 EEG sensors while they watched fifteen Chinese film clips covering three types of emotions. For each subject, the recording process is repeated in three different periods, corresponding to three sessions. |
| Dataset Splits | No | For each session, the leading trials are used as training samples and the remaining trials as testing samples. The training/test split is 9/6 for SEED and 16/8 for SEED-IV, i.e., for the SEED dataset each subject contributes 27 training trials and 18 testing trials across its three sessions (see the split sketch after the table). |
| Hardware Specification | Yes | We use 1 NVIDIA V100 GPU to train models for 600 epochs with a batch size of 512. |
| Software Dependencies | No | The paper mentions using AdamW for optimization but does not provide version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We use 6 Transformer layers as the encoder and symmetrical layers as the decoder... The hidden dimension of each channel is set to 16. For multi-dimensional positional encoding, we encode x and y with 8 dimensions each... For global random masking, we set the masking ratio to 0.5. For regional masking, we set the ratio of masked regions to 0.35... Within every 100 epochs, the first 50 epochs use global random masking and the remaining 50 use region-wise masking... We use 1 NVIDIA V100 GPU to train models for 600 epochs with a batch size of 512... We use 1 NVIDIA V100 GPU to train the model for 100 epochs with a batch size of 32. (Hedged code sketches of this configuration follow the table.) |
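The leading-trials protocol quoted in the Dataset Splits row is simple enough to pin down in code. Below is a minimal sketch; the function name and the use of plain Python lists are illustrative, while the 9/6 and 16/8 counts come from the quoted text.

```python
def split_session_trials(trials, n_train):
    """Use the leading trials of a session for training, the rest for testing."""
    return trials[:n_train], trials[n_train:]

# SEED: 15 trials per session -> 9 train / 6 test (27/18 per subject over 3 sessions)
train, test = split_session_trials(list(range(15)), n_train=9)
assert (len(train), len(test)) == (9, 6)

# SEED-IV: 24 trials per session -> 16 train / 8 test
train4, test4 = split_session_trials(list(range(24)), n_train=16)
assert (len(train4), len(test4)) == (16, 8)
```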
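For a consolidated view, the hyperparameters quoted in the Experiment Setup row can be collected into a single configuration. The key names below are illustrative; only the values are taken from the paper.

```python
# Values are quoted from the paper; key names are illustrative.
PRETRAIN_CONFIG = {
    "encoder_layers": 6,
    "decoder_layers": 6,               # "symmetrical layers as decoder"
    "hidden_dim_per_channel": 16,
    "pos_enc_dims": {"x": 8, "y": 8},  # multi-dimensional positional encoding
    "global_mask_ratio": 0.5,
    "region_mask_ratio": 0.35,
    "mask_schedule_cycle": 100,        # 50 global + 50 region-wise epochs
    "epochs": 600,
    "batch_size": 512,
    "optimizer": "AdamW",
    "hardware": "1x NVIDIA V100",
}

DOWNSTREAM_CONFIG = {
    "epochs": 100,
    "batch_size": 32,
    "hardware": "1x NVIDIA V100",
}
```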
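The setup states that each electrode's x and y coordinates are encoded with 8 dimensions each, matching the 16-dim per-channel hidden size. One plausible reading is a concatenated 2-D sinusoidal encoding, sketched below; the sinusoidal form is an assumption, since the quoted text only fixes the dimensionality split.

```python
import numpy as np

def sinusoidal_pe_1d(pos: np.ndarray, dim: int) -> np.ndarray:
    """Standard sinusoidal encoding of a 1-D coordinate into `dim` dims."""
    i = np.arange(dim // 2)
    freq = 1.0 / (10000 ** (2 * i / dim))
    angles = pos[:, None] * freq[None, :]
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

def positional_encoding_2d(xy: np.ndarray, dim_per_axis: int = 8) -> np.ndarray:
    """Encode each electrode's (x, y) location by concatenating an 8-dim
    encoding of x and an 8-dim encoding of y, giving the 16-dim size
    quoted in the setup. The sinusoidal form itself is an assumption."""
    pe_x = sinusoidal_pe_1d(xy[:, 0], dim_per_axis)
    pe_y = sinusoidal_pe_1d(xy[:, 1], dim_per_axis)
    return np.concatenate([pe_x, pe_y], axis=-1)  # shape: (n_channels, 16)
```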
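Finally, the alternating pre-training masking schedule (global random masking at ratio 0.5 for the first 50 epochs of each 100-epoch cycle, region-wise masking at ratio 0.35 for the rest) can be sketched as follows. The region definitions and exact sampling logic are assumptions; only the ratios and the 50/50 alternation come from the paper.

```python
import numpy as np

def masking_mode(epoch: int) -> str:
    """First 50 epochs of each 100-epoch cycle: global random masking;
    remaining 50: region-wise masking (per the quoted schedule)."""
    return "global" if (epoch % 100) < 50 else "regional"

def global_random_mask(n_tokens, ratio=0.5, rng=None):
    """Mask a random subset of channel tokens (ratio 0.5 per the paper)."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros(n_tokens, dtype=bool)
    idx = rng.choice(n_tokens, size=int(n_tokens * ratio), replace=False)
    mask[idx] = True
    return mask

def region_mask(regions, n_tokens, ratio=0.35, rng=None):
    """Mask whole regions; `regions` maps each region to its channel indices
    (how regions are defined is an assumption here, not quoted from the paper)."""
    if rng is None:
        rng = np.random.default_rng()
    n_masked = max(1, int(len(regions) * ratio))
    mask = np.zeros(n_tokens, dtype=bool)
    for r in rng.choice(len(regions), size=n_masked, replace=False):
        mask[regions[r]] = True
    return mask
```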