A Local-Ascending-Global Learning Strategy for Brain-Computer Interface
Authors: Dongrui Gao, Haokai Zhang, Pengrui Li, Tian Tang, Shihong Liu, Zhihong Zhou, Shaofei Ying, Ye Zhu, Yongqing Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed LAG strategy is validated using datasets related to fatigue (SEED-VIG), emotion (SEED-IV), and motor imagery (BCI Competition IV 2a). The results demonstrate the generalizability of LAG, achieving satisfactory outcomes in independent-subject experiments across all three datasets. |
| Researcher Affiliation | Academia | 1 School of Computer Science, Chengdu University of Information Technology, Chengdu, 610225, China; 2 School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China |
| Pseudocode | Yes | Algorithm 1 (Training Stage): Input: train set G = {B_M}; Output: {y_pred, Loss}. |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code availability for the described methodology. |
| Open Datasets | Yes | This paper presents validation experiments conducted on three EEG datasets (SEED-VIG, SEED-IV, BCI Competition IV 2a), each associated with a distinct cognitive task (Gao et al. 2023a; Peng et al. 2023; Zhang et al. 2019). |
| Dataset Splits | No | The paper mentions 'independent-subject experiments' and 'cross-subject comparison results' but does not specify exact split percentages or sample counts for training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer' but does not provide specific version numbers for any software dependencies or frameworks used. |
| Experiment Setup | Yes | The graph convolution order is set to 2, and a dropout rate of 0.5 is applied. Model parameters are optimized using the Adam optimizer, with a learning rate search range of [1e-3, 1e-1] and an L2 regularization search range of [5e-3, 3e-1]. (A hedged PyTorch sketch of this configuration is shown below the table.) |
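
As a rough illustration of the reported configuration, the sketch below instantiates the stated hyperparameters (Adam optimizer, dropout of 0.5, a learning rate within the [1e-3, 1e-1] search range, and L2 regularization within [5e-3, 3e-1]) in PyTorch. The `PlaceholderLAG` module, input dimension, and class count are hypothetical stand-ins; the paper's actual LAG architecture and its order-2 graph convolution are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the LAG model: only the reported training
# hyperparameters are reflected, not the paper's architecture.
class PlaceholderLAG(nn.Module):
    def __init__(self, in_features: int, n_classes: int, dropout: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 64),
            nn.ReLU(),
            nn.Dropout(dropout),      # dropout rate of 0.5, as reported
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Feature dimension and class count are assumed values for illustration.
model = PlaceholderLAG(in_features=310, n_classes=4)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,            # searched within [1e-3, 1e-1] per the paper
    weight_decay=5e-3,  # L2 regularization searched within [5e-3, 3e-1]
)
criterion = nn.CrossEntropyLoss()
```

A full reproduction would replace `PlaceholderLAG` with the LAG model itself and tune the learning rate and weight decay over the reported search ranges.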