Open Vocabulary Electroencephalography-to-Text Decoding and Zero-Shot Sentiment Classification

Authors: Zhenhailong Wang, Heng Ji

AAAI 2022, pp. 5350-5358

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines. (A hedged metric sketch follows the table.)
Researcher Affiliation | Academia | Zhenhailong Wang, Heng Ji, University of Illinois at Urbana-Champaign, {wangz3, hengji}@illinois.edu
Pseudocode | No | The paper describes its models and pipeline using diagrams (e.g., Figure 2) but does not provide structured pseudocode or algorithm blocks. (A hedged architecture sketch follows the table.)
Open Source Code | Yes | The code is made publicly available for research purposes at https://github.com/MikeWangWZHL/EEG-To-Text.
Open Datasets | Yes | We use ZuCo (Hollenstein et al. 2018, 2020) datasets, which contain simultaneous EEG and eye-tracking data recorded from natural reading tasks.
Dataset Splits | Yes | We then split each reading task's data into train, development, and test sets (80%, 10%, 10%) by unique sentences; that is, the sentences in the test set are entirely unseen. (A split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using pre-trained models and libraries such as BART, BERT, RoBERTa, HuggingFace transformers, and spaCy, but does not provide specific version numbers for these software dependencies. (A version-logging sketch follows the table.)
Experiment Setup | No | The paper gives some architectural details, such as the MTE having "6 layers and 8 attention heads," and states its objective function, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations. (See the architecture sketch after the table.)
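
Although the paper provides no pseudocode or full training configuration, the details it does report (a multi-layer transformer encoder, the MTE, with 6 layers and 8 attention heads, feeding a pretrained BART) support a minimal sketch of the pipeline. Everything beyond those two numbers is an assumption: the EEG feature dimension, the projection layer, the BART checkpoint name, and all class/variable names are hypothetical, not the authors' implementation.

```python
import torch.nn as nn
from transformers import BartForConditionalGeneration

EEG_FEATURE_DIM = 840  # assumption: word-level EEG feature size, not restated here
BART_HIDDEN = 1024     # hidden size of the facebook/bart-large checkpoint

class EEGToTextSketch(nn.Module):
    """Hypothetical reconstruction: MTE (6 layers, 8 heads) in front of BART."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=EEG_FEATURE_DIM, nhead=8, batch_first=True)
        # "6 layers and 8 attention heads" are the only values the paper reports
        self.mte = nn.TransformerEncoder(layer, num_layers=6)
        # assumed projection from EEG feature space into BART's embedding space
        self.proj = nn.Linear(EEG_FEATURE_DIM, BART_HIDDEN)
        self.bart = BartForConditionalGeneration.from_pretrained(
            "facebook/bart-large")

    def forward(self, eeg_feats, labels=None):
        # eeg_feats: (batch, num_words, EEG_FEATURE_DIM) word-level EEG features
        h = self.proj(self.mte(eeg_feats))
        # encoded EEG features stand in for token embeddings on the BART side
        return self.bart(inputs_embeds=h, labels=labels)
```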
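
The quoted split protocol (80%/10%/10% by unique sentences, with test sentences fully unseen during training) can be sketched as follows; the random seed, function name, and data representation are assumptions rather than the paper's actual preprocessing code.

```python
import random

def split_by_sentence(samples, seed=42):
    """samples: list of (eeg_features, sentence) pairs; seed is an assumption."""
    # deduplicate and shuffle the *sentences*, not the individual samples,
    # so every occurrence of a sentence lands in exactly one split
    sentences = sorted({sent for _, sent in samples})
    random.Random(seed).shuffle(sentences)
    n = len(sentences)
    train_set = set(sentences[: int(0.8 * n)])
    dev_set = set(sentences[int(0.8 * n): int(0.9 * n)])
    buckets = {"train": [], "dev": [], "test": []}
    for feats, sent in samples:
        key = ("train" if sent in train_set
               else "dev" if sent in dev_set else "test")
        buckets[key].append((feats, sent))
    return buckets
```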
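
For the headline numbers (40.1% BLEU-1 on decoding, 55.6% F1 on ternary sentiment), a minimal evaluation sketch using NLTK and scikit-learn is shown below. The averaging mode ("macro") and the toy inputs are assumptions; the paper's exact evaluation script may differ.

```python
from nltk.translate.bleu_score import corpus_bleu
from sklearn.metrics import f1_score

# hypothetical reference/hypothesis pair for BLEU-1 (unigram-only weights)
refs = [[["the", "movie", "was", "great"]]]
hyps = [["the", "film", "was", "great"]]
bleu1 = corpus_bleu(refs, hyps, weights=(1.0, 0, 0, 0))

# hypothetical ternary sentiment labels (0=negative, 1=neutral, 2=positive)
y_true = [0, 1, 2, 1]
y_pred = [0, 1, 1, 1]
f1 = f1_score(y_true, y_pred, average="macro")

print(f"BLEU-1={bleu1:.3f}  F1={f1:.3f}")
```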
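
Since no dependency versions are reported, a reproduction would have to pin and record its own environment. A minimal way to log the versions of the libraries the paper names (assuming they are installed):

```python
import torch
import transformers
import spacy

# record the environment alongside any reproduction results
print("torch", torch.__version__)
print("transformers", transformers.__version__)
print("spacy", spacy.__version__)
```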