DeWave: Discrete Encoding of EEG Waves for EEG to Text Translation

Authors: Yiqun Duan, Jinzhao Zhou, Zhen Wang, Yu-Kai Wang, Chin-Teng Lin

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our model surpasses the previous baseline (40.1 and 31.7) by 3.06% and 6.34%, respectively, achieving 41.35 BLEU-1 and 33.71 Rouge-F on the ZuCo dataset. Experiments employ non-invasive EEG signals and data from the ZuCo dataset [16].
Researcher Affiliation | Academia | Yiqun Duan (1), Jinzhao Zhou (1), Zhen Wang (2), Yu-Kai Wang (1), Chin-Teng Lin (1); (1) GrapheneX-UTS HAI Centre, Australian Artificial Intelligence Institute, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007; (2) School of Computer Science, The University of Sydney, Camperdown, NSW 2050.
Pseudocode | No | The paper describes the model architecture and processes, but it does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing the code for the DeWave methodology itself. It mentions using a pre-trained BART model and adapting Wave2Vec, but not making their own implementation publicly available.
Open Datasets | Yes | DeWave utilizes both ZuCo 1.0 [15] and 2.0 [17] for experiments. The reading tasks' data are divided into train (80%), development (10%), and test (10%) splits of 10,874, 1,387, and 1,387 unique sentences, respectively, with no intersections.
Dataset Splits | Yes | The reading tasks' data are divided into train (80%), development (10%), and test (10%) splits of 10,874, 1,387, and 1,387 unique sentences, respectively, with no intersections.
Hardware Specification | Yes | All models are trained on Nvidia V100 and A100 GPUs.
Software Dependencies | No | The paper mentions using "BART [21]" and adapting "Wave2Vec [2]" but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | For the self-supervised decoding of raw waves, we use a learning rate of 5e-4 and a VQ coefficient of 0.25 for 35 training epochs. For training the codex (stage 1), DeWave uses a learning rate of 5e-4 for 35 epochs. For finetuning the translation (stage 2), DeWave uses a learning rate of 5e-6 for 30 epochs. We use SGD as the optimizer for training all the models.
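
The Dataset Splits row above reports only the ratios, split sizes, and the "no intersections" property. The sketch below shows one way such a sentence-level split could be realized, assuming the corpus is available as (sentence, EEG) pairs; the function name, pairing format, and random seed are illustrative choices, not details taken from the paper.

```python
import random
from collections import defaultdict

def split_by_unique_sentence(pairs, ratios=(0.8, 0.1, 0.1), seed=0):
    """Assign every unique sentence to exactly one of train/dev/test, so the
    splits share no sentences (the "no intersections" property above)."""
    by_sentence = defaultdict(list)
    for sentence, eeg in pairs:            # pairs: iterable of (sentence text, EEG recording)
        by_sentence[sentence].append(eeg)

    sentences = sorted(by_sentence)        # deterministic order before shuffling
    random.Random(seed).shuffle(sentences)

    n = len(sentences)
    n_train, n_dev = int(ratios[0] * n), int(ratios[1] * n)
    buckets = {
        "train": sentences[:n_train],                 # ~80% (10,874 sentences in the paper)
        "dev":   sentences[n_train:n_train + n_dev],  # ~10% (1,387)
        "test":  sentences[n_train + n_dev:],         # ~10% (1,387)
    }
    # Expand each split back into (sentence, EEG) pairs.
    return {name: [(s, e) for s in sents for e in by_sentence[s]]
            for name, sents in buckets.items()}
```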
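
The Experiment Setup row gives the learning rates, epoch counts, VQ coefficient, and optimizer, but not how the two stages are wired together. The following is a minimal PyTorch sketch under those reported hyperparameters; the module names, forward signatures, and the composition of the reconstruction and VQ terms are assumptions for illustration, not the authors' implementation.

```python
from torch.optim import SGD

# Reported hyperparameters (Experiment Setup row); everything else below is a placeholder.
STAGE1 = {"lr": 5e-4, "epochs": 35, "vq_coef": 0.25}   # discrete codex training
STAGE2 = {"lr": 5e-6, "epochs": 30}                     # translation finetuning

def train_codex(codex_encoder, loader):
    """Stage 1: learn the discrete codex with SGD at lr 5e-4 for 35 epochs."""
    opt = SGD(codex_encoder.parameters(), lr=STAGE1["lr"])
    for _ in range(STAGE1["epochs"]):
        for eeg, _text in loader:
            # Hypothetical forward pass returning a reconstruction term and a VQ term;
            # weighting the VQ term by 0.25 mirrors the reported coefficient.
            recon_loss, vq_loss = codex_encoder(eeg)
            loss = recon_loss + STAGE1["vq_coef"] * vq_loss
            opt.zero_grad()
            loss.backward()
            opt.step()

def finetune_translation(model, loader):
    """Stage 2: finetune EEG-to-text translation with SGD at lr 5e-6 for 30 epochs."""
    opt = SGD(model.parameters(), lr=STAGE2["lr"])
    for _ in range(STAGE2["epochs"]):
        for eeg, text_ids in loader:
            # Hypothetical seq2seq-style interface (e.g. a BART-like model that
            # returns an object with a .loss attribute when given labels).
            loss = model(eeg, labels=text_ids).loss
            opt.zero_grad()
            loss.backward()
            opt.step()
```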