Decoding EEG by Visual-guided Deep Neural Networks

Authors: Zhicheng Jiao, Haoxuan You, Fan Yang, Xin Li, Han Zhang, Dinggang Shen

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Performance of our framework is evaluated and compared with state-of-the-art methods on two public datasets: (1) ImageNet subset [Spampinato et al., 2017; Kavasidis et al., 2017]; (2) Face and object [Kaneshiro et al., 2015]. Results listed in this table show that our visual-guided frameworks outperform LDA and LSTM, among which the ResNet101-guided classification method achieves a new state-of-the-art result, with our method improving the performance of the EEG classification stage.
Researcher Affiliation | Collaboration | (1) Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; (2) BNRist, KLISS, School of Software, Tsinghua University, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | Performance of our framework is evaluated and compared with state-of-the-art methods on two public datasets: (1) ImageNet subset [Spampinato et al., 2017; Kavasidis et al., 2017]; (2) Face and object [Kaneshiro et al., 2015].
Dataset Splits | Yes | The independent separations of EEG signal datasets are 80% for training, 10% for validation, and 10% for testing.
Hardware Specification | Yes | In this research, we perform the deep learning experiments on a Titan V graphics card provided by the NVIDIA Academic GPU Grant Program.
Software Dependencies | No | Our models are based on the deep learning toolkit of TensorFlow [Abadi et al., 2016]. No version number is specified.
Experiment Setup | Yes | Training strategy of our classification network is Adam. Structure of our EEG classification net in cognitive domain is the same form as that of AlexNet... Parameters of D and G are listed in Table 3 and Table 4... FCN for the visual-consistent term λ(Lper + Lsem) is pretrained on VOC2012 dataset [Everingham et al., 2010], and λ is set to 0.5.
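
The only quantitative setup detail quoted above is the visual-consistent weighting λ(Lper + Lsem) with λ = 0.5. Since no source code is released (see the Open Source Code row), the snippet below is only a minimal sketch of that weighting in TensorFlow (the toolkit named in the Software Dependencies row); the function name `visual_consistent_term`, the tensor names `l_per` and `l_sem`, and the example loss values are hypothetical and not taken from the paper.

```python
import tensorflow as tf

# Reported weight for the visual-consistent term: lambda = 0.5
LAMBDA = 0.5

def visual_consistent_term(l_per: tf.Tensor, l_sem: tf.Tensor,
                           lam: float = LAMBDA) -> tf.Tensor:
    """Combine perceptual and semantic losses as lambda * (L_per + L_sem)."""
    return lam * (l_per + l_sem)

# Illustrative scalar losses (placeholder values, not from the paper):
l_per = tf.constant(0.8)  # perceptual loss, e.g. from pretrained FCN features
l_sem = tf.constant(1.2)  # semantic loss, e.g. from pretrained FCN predictions
print(visual_consistent_term(l_per, l_sem))  # 0.5 * (0.8 + 1.2) = 1.0
```

How this weighted term is combined with the adversarial objective of D and G is not stated in the quoted excerpt, so it is left out of the sketch.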