Multi-Tier Platform for Cognizing Massive Electroencephalogram

Authors: Zheng Chen, Lingwei Zhu, Ziwei Yang, Renyuan Zhang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "From experiment results, our platform achieves the general cognition overall accuracy of 87% by leveraging sole EEG, which is 2% superior to the state-of-the-art. Moreover, our developed multi-tier methodology offers visible and graphical interpretations of the temporal characteristics of EEG by identifying the critical episodes, which is demanded in neurodynamics but hardly appears in conventional cognition scenarios."
Researcher Affiliation | Academia | Osaka University, Japan; Nara Institute of Science and Technology, Japan
Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "Details of the experiments are available in our online Appendix" (https://doi.org/10.6084/m9.figshare.19682961.v1)
Open Datasets | Yes | "Specifically, we compare the proposed platform against state-of-the-art algorithms on several authoritative datasets: (i) Sleep Heart Health Study (SHHS) Database; (ii) Sleep-EDF Database. The SHHS dataset is the largest public sleep dataset, comprising 42,560 hours recorded from 5,793 subjects [Chen et al., 2021]."
Dataset Splits | No | The paper mentions using datasets and performing experiments but does not explicitly provide details about training, validation, and test splits (e.g., percentages or counts of samples for each split).
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | No | The paper states "All results are averaged over 10 random seeds" but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other detailed training configuration.
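
The last four rows flag missing experimental detail. As an illustration of what a complete specification might look like, here is a minimal, hypothetical Python sketch: the seed count comes from the paper's "averaged over 10 random seeds" statement and the subject count from the SHHS description, but the split fractions, hyperparameter values, subject-wise splitting strategy, and the run_experiment stub are all assumptions, not values reported by the authors.

```python
# Hypothetical reproducibility stub -- none of these settings come from the paper.
# It illustrates the split/seed/hyperparameter details the report flags as missing.
import random
import statistics

SEEDS = list(range(10))  # paper: "All results are averaged over 10 random seeds"
SPLIT = {"train": 0.7, "val": 0.1, "test": 0.2}             # assumed fractions
HYPERPARAMS = {"lr": 1e-3, "batch_size": 64, "epochs": 50}  # assumed values

def subject_wise_split(subject_ids, seed):
    """Split by subject (not by EEG epoch) so no subject leaks across splits."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(SPLIT["train"] * n)
    n_val = int(SPLIT["val"] * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

def run_experiment(seed):
    """Placeholder for one training run; returns a dummy test accuracy."""
    train, val, test = subject_wise_split(range(5793), seed)  # 5,793 SHHS subjects
    return 0.87  # stand-in for a real model's test accuracy

accuracies = [run_experiment(s) for s in SEEDS]
print(f"mean accuracy over {len(SEEDS)} seeds: {statistics.mean(accuracies):.3f}")
```

Subject-wise (rather than recording-wise) splitting is shown here because it is the usual way to avoid leakage in sleep-EEG evaluation; whether the paper used it is precisely the information the "Dataset Splits" row reports as absent.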