Decoding Natural Images from EEG for Object Recognition

Authors: Yonghao Song, Bingchuan Liu, Xiang Li, Nanlin Shi, Yijun Wang, Xiaorong Gao

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper presents a self-supervised framework to demonstrate the feasibility of learning image representations from EEG signals, particularly for object recognition. The framework utilizes image and EEG encoders to extract features from paired image stimuli and EEG responses. Contrastive learning aligns these two modalities by constraining their similarity. Our approach achieves state-of-the-art results on a comprehensive EEG-image dataset, with a top-1 accuracy of 15.6% and a top-5 accuracy of 42.8% in 200-way zero-shot tasks. (A hedged sketch of this contrastive objective appears after the table.)
Researcher Affiliation | Academia | Yonghao Song¹, Bingchuan Liu¹, Xiang Li¹, Nanlin Shi¹, Yijun Wang², Xiaorong Gao¹. ¹Department of Biomedical Engineering, Tsinghua University; ²Institute of Semiconductors, CAS.
Pseudocode | Yes | Algorithm 1: Natural Image Contrast EEG framework. (A hypothetical code reading of the algorithm appears after the table.)
Open Source Code | Yes | Code is available at https://github.com/eeyhsong/NICE-EEG.
Open Datasets | Yes | The dataset (Gifford et al., 2022) contains EEG data from ten participants with a time-efficient rapid serial visual presentation (RSVP) paradigm. The training set includes 1654 concepts × 10 images × 4 repetitions. (The implied trial count is worked out after the table.)
Dataset Splits | Yes | We randomly select 740 trials from the training data as the validation set in each run of the code. (A sketch of this split appears after the table.)
Hardware Specification | Yes | Our method is implemented with PyTorch on a GeForce 4090 GPU.
Software Dependencies | No | The paper names PyTorch as the implementation framework but does not specify its version number or any other software dependencies with their versions.
Experiment Setup | Yes | Best models are saved when the validation loss reaches its minimum within the 200 training epochs. ... It takes about 5 minutes per subject to train with a batch size of 1000. ... The k in TSConv is set to 40, m1 to 25, m2 to 51, and s to 5 by pre-experiments. The Adam optimizer is used with a learning rate of 0.0002, β1 of 0.5, and β2 of 0.999. (These hyperparameters are mapped to code after the table.)
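
The Research Type row notes that contrastive learning aligns the EEG and image modalities by constraining their similarity. Below is a minimal PyTorch sketch of that objective, assuming the standard CLIP-style symmetric InfoNCE loss and its usual temperature initialization; it is an illustration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def symmetric_info_nce(eeg_emb, img_emb, logit_scale):
    """CLIP-style contrastive loss between paired EEG and image embeddings.

    eeg_emb, img_emb: (batch, dim) outputs of the EEG and image encoders.
    logit_scale: learnable log inverse-temperature (an assumption here).
    """
    # L2-normalize so the dot product becomes cosine similarity.
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    img_emb = F.normalize(img_emb, dim=-1)

    # Pairwise similarity matrix; matched pairs sit on the diagonal.
    logits = logit_scale.exp() * eeg_emb @ img_emb.t()
    targets = torch.arange(eeg_emb.size(0), device=eeg_emb.device)

    # Symmetric cross-entropy over both retrieval directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Usual CLIP initialization for the temperature (an assumption, not from the paper).
logit_scale = torch.nn.Parameter(torch.tensor(1 / 0.07).log())
```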
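Reading Algorithm 1 (Pseudocode row) as code, the loop below pairs EEG batches with image stimuli, optimizes the symmetric loss from the previous sketch, and checkpoints on validation loss as the Experiment Setup row describes. eeg_encoder, img_encoder, optimizer, and the data loaders are placeholders, not the repository's API.

```python
import torch

# Placeholders: eeg_encoder, img_encoder, optimizer, train_loader, and
# val_loader are assumed to be defined elsewhere; 200 epochs as reported.
best_val = float("inf")
for epoch in range(200):
    for eeg, img in train_loader:            # paired EEG trials and image stimuli
        loss = symmetric_info_nce(eeg_encoder(eeg), img_encoder(img), logit_scale)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():                    # validation on the 740 held-out trials
        val_loss = sum(
            symmetric_info_nce(eeg_encoder(e), img_encoder(i), logit_scale).item()
            for e, i in val_loader) / len(val_loader)
    if val_loss < best_val:                  # save the best model on validation loss
        best_val = val_loss
        torch.save(eeg_encoder.state_dict(), "best_eeg_encoder.pt")
```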
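The composition quoted in the Open Datasets row implies 1654 × 10 × 4 = 66,160 training trials per participant. A tiny sanity check, under the assumption that the preprocessed EEG ships as a (conditions, repetitions, channels, samples) array; the file name and shape are guesses, not the dataset's documented layout.

```python
import numpy as np

# Hypothetical file name and layout: 1654 concepts x 10 images = 16540
# conditions, 4 repetitions each, C channels, T samples.
eeg = np.load("preprocessed_eeg_training.npy")   # assumed shape (16540, 4, C, T)
n_conditions, n_reps = eeg.shape[:2]
print(n_conditions * n_reps)                     # 66160 trials in total
```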
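For the Dataset Splits row, a minimal way to hold out 740 random trials per run, assuming trials are simply indexed 0..N-1 and the generator is reseeded on every run (matching "in each run of the code"); split_train_val is a hypothetical helper.

```python
import numpy as np

def split_train_val(n_trials, n_val=740, seed=None):
    """Hold out n_val randomly chosen trials for validation (hypothetical helper)."""
    rng = np.random.default_rng(seed)        # seed=None -> a fresh split each run
    perm = rng.permutation(n_trials)
    return perm[n_val:], perm[:n_val]        # train indices, validation indices

train_idx, val_idx = split_train_val(66160)  # per-subject trial count from above
```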
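Finally, the Experiment Setup numbers map directly onto PyTorch. The TSConv layer ordering below is an assumption consistent with shallow temporal-spatial EEG ConvNets and the stated k, m1, m2, and s; make_tsconv and the channel count are hypothetical.

```python
import torch
import torch.nn as nn

def make_tsconv(n_channels, k=40, m1=25, m2=51, s=5):
    """Temporal-spatial convolution block; the layer order is an assumption."""
    return nn.Sequential(
        nn.Conv2d(1, k, (1, m1)),              # temporal conv: k kernels of length m1
        nn.AvgPool2d((1, m2), stride=(1, s)),  # temporal pooling: window m2, stride s
        nn.BatchNorm2d(k),
        nn.ELU(),
        nn.Conv2d(k, k, (n_channels, 1)),      # spatial conv across all EEG channels
        nn.BatchNorm2d(k),
        nn.ELU(),
        nn.Flatten(),
    )

model = make_tsconv(n_channels=17)             # channel count is an assumption
optimizer = torch.optim.Adam(model.parameters(),
                             lr=2e-4,          # learning rate 0.0002
                             betas=(0.5, 0.999))
```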