Learning Representations from EEG with Deep Recurrent-Convolutional Neural Networks

Authors: Pouya Bashivan, Irina Rish, Mohammed Yeasin, Noel Codella

ICLR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluation on the cognitive load classification task demonstrated significant improvements in classification accuracy over current state-of-the-art approaches in this field, reducing the classification error from 15.3% (state of the art on this application) to 8.9%.
Researcher Affiliation | Collaboration | Pouya Bashivan, Electrical and Computer Engineering Department, University of Memphis, Memphis, TN, USA, pbshivan@memphis.edu; Irina Rish, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA, rish@us.ibm.com; Mohammed Yeasin, Electrical and Computer Engineering Department, University of Memphis, Memphis, TN, USA, myeasin@memphis.edu; Noel Codella, IBM T.J. Watson Research Center, Yorktown Heights, NY, USA, nccodell@us.ibm.com
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | The code necessary for generating EEG images and building and training the networks discussed in this paper is available online at https://github.com/pbashivan/EEGLearn.
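The linked EEGLearn repository covers the image-generation step, in which per-electrode spectral power values are interpolated over a 2-D projection of the electrode locations to form an image. Below is a minimal sketch of that kind of interpolation; the function name `make_eeg_image`, the grid size, and the use of `scipy.interpolate.griddata` are illustrative assumptions, not the repository's exact implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def make_eeg_image(powers, locs_2d, n=32):
    # powers: (n_electrodes,) band-power values for one frequency band
    # locs_2d: (n_electrodes, 2) electrode positions projected onto a plane
    # Returns an (n, n) image via cubic (Clough-Tocher) interpolation,
    # with zeros outside the convex hull of the electrodes.
    grid_x, grid_y = np.mgrid[
        locs_2d[:, 0].min():locs_2d[:, 0].max():n * 1j,
        locs_2d[:, 1].min():locs_2d[:, 1].max():n * 1j,
    ]
    return griddata(locs_2d, powers, (grid_x, grid_y),
                    method='cubic', fill_value=0.0)

# Illustrative usage with random placeholder data (64 electrodes assumed).
rng = np.random.default_rng(0)
image = make_eeg_image(rng.normal(size=64), rng.normal(size=(64, 2)))
```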
Open Datasets | No | The paper mentions using an EEG dataset acquired during an experiment, with details reported in a previous publication (Bashivan et al., 2014), but it does not provide a direct link, DOI, repository name, or explicit statement of public availability for the dataset itself.
Dataset Splits | Yes | For evaluating the performance of each classifier we followed the leave-subject-out cross-validation approach. In each of the 13 folds, all trials belonging to one of the subjects were used as the test set. A number of samples equal to the size of the test set was then randomly extracted from the rest of the data as the validation set, and the remaining samples were used as the training set.
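Read literally, that protocol can be sketched as follows. This is a hedged illustration, not the authors' code: the arrays `X`, `y`, and `subject_ids` are placeholders, their shapes and the class count are illustrative, and scikit-learn's `LeaveOneGroupOut` stands in for the per-subject fold logic; only the 13 subjects (hence 13 folds) come from the quoted text.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Placeholder data: trial features, labels, and per-trial subject IDs.
rng = np.random.default_rng(0)
X = rng.normal(size=(2600, 192))
y = rng.integers(0, 4, size=2600)
subject_ids = rng.integers(0, 13, size=2600)  # 13 subjects -> 13 folds

for train_val_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
    # All trials of the held-out subject form the test set.
    shuffled = rng.permutation(train_val_idx)
    # A validation set equal in size to the test set is drawn at random
    # from the remaining trials; the rest becomes the training set.
    val_idx = shuffled[:len(test_idx)]
    train_idx = shuffled[len(test_idx):]
    # ... train on train_idx, early-stop on val_idx, evaluate on test_idx
```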
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments were provided in the paper.
Software Dependencies | No | The paper mentions using 'Lasagne' and the 'Adam algorithm' but does not specify version numbers for these software components.
Experiment Setup | Yes | We trained the recurrent-convolutional network with the Adam algorithm (Kingma & Ba, 2015), with a learning rate of 10^-3 and decay rates for the first and second moments of 0.9 and 0.999, respectively. Batch size was set to 20. 50% dropout was used on the last two fully connected layers. The network parameters converge after about 600 iterations (5 epochs).
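Those hyperparameters translate directly into optimizer settings. The sketch below uses PyTorch as a stand-in for the Lasagne code the authors actually used; the model is a toy placeholder, and only the learning rate, the moment decay rates (PyTorch's betas), the batch size, and the dropout rate come from the quoted text.

```python
import torch
import torch.nn as nn

# Toy placeholder network; layer sizes are illustrative and do not
# reproduce the paper's recurrent-convolutional architecture.
model = nn.Sequential(
    nn.Linear(192, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # 50% dropout on the last two fully connected layers
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 4),
)

# Adam with the reported settings: learning rate 1e-3 and first/second
# moment decay rates of 0.9 and 0.999.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
loss_fn = nn.CrossEntropyLoss()
batch_size = 20          # as reported in the quoted text
```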