Plug-and-Play Domain Adaptation for Cross-Subject EEG-based Emotion Recognition

Authors: Li-Ming Zhao, Xu Yan, Bao-Liang Lu (pp. 863-870)

AAAI 2021

Reproducibility assessment: for each variable, the result followed by the LLM's supporting response.
Research Type: Experimental
LLM Response: "Experimental results on the SEED dataset show that our model greatly shortens the calibration time to within a minute while maintaining the recognition accuracy, all of which make emotion decoding more generalizable and practicable."
Researcher Affiliation: Collaboration
LLM Response: Li-Ming Zhao (1), Xu Yan (3), Bao-Liang Lu (1,2). (1) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China, 200240; (2) Center for Brain-Machine Interface and Neuromodulation, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China, 200020; (3) Department of Linguistics, University of Washington, Seattle, WA, USA, 98195.
Pseudocode: Yes
LLM Response: Algorithm 1: Plug-and-play domain adaptation.
Open Source Code: No
LLM Response: The paper provides no statement or link indicating that the code for the described methodology is open-sourced.
Open Datasets: Yes
LLM Response: "We verify the performance of our PPDA model on SEED (Zheng and Lu 2015), a public affective EEG dataset for emotion recognition."
Dataset Splits: Yes
LLM Response: "In each iteration, we select one subject as the target new subject and the other 14 as the existing source subjects." Although no labeled emotion data are used in the calibration phase, all 3394 sample points in SEED are labeled; therefore, the first T seconds of data, with the emotion tags discarded, are taken as calibration data. The calibration time T is set to 45 s.
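The leave-one-subject-out protocol described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the helper name `loso_splits` and the representation of subjects as integer IDs are our assumptions; only the 15-subject rotation, the 14 source subjects per fold, and the 45 s calibration window come from the paper.

```python
def loso_splits(subject_ids, calib_seconds=45):
    """Yield (target, sources, calib_seconds) for each leave-one-subject-out fold.

    In every fold, one subject is held out as the new target subject and the
    remaining subjects act as existing sources. The first `calib_seconds` of
    the target's data (labels discarded) serve as calibration data.
    """
    for target in subject_ids:
        sources = [s for s in subject_ids if s != target]
        yield target, sources, calib_seconds

# SEED has 15 subjects, so the protocol produces 15 folds of 14 sources each.
folds = list(loso_splits(list(range(15))))
```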
Hardware Specification: No
LLM Response: The paper does not describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies: No
LLM Response: The paper states: "The whole model is implemented by PyTorch." However, it does not specify version numbers for PyTorch or any other software components.
Experiment Setup: Yes
LLM Response: The layer number, the hidden size, and the time step of the LSTM are fixed to 2, 64, and 15, respectively. The emotion classifiers and the domain classifier are single-layer fully connected networks with hidden dimensions of 64. The calibration time T is set to 45 s. For the trade-off parameters that control the synergy of the loss terms, values are randomly sought: α ∈ {k × 10^-1 | k ∈ {1, ..., 9}}, β ∈ {k × 10^-4 | k ∈ {1, ..., 5}}, γ ∈ {k × 10^-5 | k ∈ {1, ..., 3}}, and δ ∈ {k × 10^-2 | k ∈ {1, ..., 3}}. The Adam optimizer is applied, and the learning rate is selected from {2^k × 10^-4 | k ∈ [-5, 5]}.
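The reported configuration and random hyperparameter search can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the names `LSTM_CFG` and `sample_tradeoffs` are ours, and reading the garbled learning-rate grid as 2^k × 10^-4 with integer k in [-5, 5] is our assumption; the fixed LSTM sizes and the value grids for α, β, γ, δ come from the paper.

```python
import random

# Fixed architecture settings reported in the paper.
LSTM_CFG = {"num_layers": 2, "hidden_size": 64, "time_step": 15}
CLASSIFIER_HIDDEN = 64      # single-layer FC emotion/domain classifiers
CALIBRATION_SECONDS = 45

def sample_tradeoffs(rng=random):
    """Randomly sample the loss-term weights and learning rate from the
    search grids reported in the paper."""
    alpha = rng.choice([k * 1e-1 for k in range(1, 10)])   # {0.1, ..., 0.9}
    beta  = rng.choice([k * 1e-4 for k in range(1, 6)])    # {1e-4, ..., 5e-4}
    gamma = rng.choice([k * 1e-5 for k in range(1, 4)])    # {1e-5, ..., 3e-5}
    delta = rng.choice([k * 1e-2 for k in range(1, 4)])    # {0.01, ..., 0.03}
    # Adam learning rate: 2^k * 1e-4, k in [-5, 5] (our reading of the source).
    lr = (2 ** rng.choice(range(-5, 6))) * 1e-4
    return alpha, beta, gamma, delta, lr
```

In a full run, each sampled tuple would weight the corresponding loss terms and configure the Adam optimizer for one training trial.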