EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals
Authors: Xuan-Hao Liu, Yan-Kai Liu, Yansen Wang, Kan Ren, Hanwen Shi, Zilong Wang, Dongsheng Li, Bao-Liang Lu, Wei-Long Zheng
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we explore decoding dynamic visual perception from electroencephalography (EEG)... we develop a large dataset... we propose a novel baseline, EEG2Video, for video reconstruction from EEG signals. EEG2Video achieves a 2-way accuracy of 79.8% in semantic classification tasks and a structural similarity index (SSIM) of 0.256. (A sketch of the 2-way accuracy convention appears after the table.) |
| Researcher Affiliation | Collaboration | ¹Shanghai Jiao Tong University, ²Microsoft Research Asia, ³ShanghaiTech University |
| Pseudocode | Yes | Algorithm 1: Training Stage of EEG2Video Framework |
| Open Source Code | No | Yes, we have provided the code for implementing our experiments and part of the dataset (one set of data and labels) in the supplementary. However, due to the size limitation and the principle of anonymity, we are unable to upload the whole dataset. We will soon publish our dataset and code on the official website of our institute. |
| Open Datasets | Yes | we develop a large EEG dataset, called the SJTU EEG Dataset for Dynamic Vision (SEED-DV), collected from 20 subjects... Yes, we have provided the code for implementing our experiments and part of the dataset (one set of data and labels) in the supplementary. |
| Dataset Splits | Yes | Specifically, we select each single video block as the test set in turn, the block before the test set as the validation set, and the remaining 5 blocks compose the training set. (A split sketch appears after the table.) |
| Hardware Specification | Yes | All models are implemented with PyTorch and evaluated on an NVIDIA A100 GPU. |
| Software Dependencies | No | All models are implemented with PyTorch and evaluated on an NVIDIA A100 GPU. Adam optimizer is used with the learning rate η = 0.001. |
| Experiment Setup | Yes | Adam optimizer is used with the learning rate η = 0.001. Batch size is set to 256 for all methods, and the number of training epochs is 100. (A training-loop sketch follows the table.) |
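
The quoted split protocol implies a leave-one-block-out rotation over 7 video blocks per subject (1 test + 1 validation + 5 training). The helper below is a minimal sketch of that rotation, not the authors' code; the `block_splits` name and the wrap-around choice of validation block when the first block is tested are assumptions.

```python
def block_splits(n_blocks: int = 7):
    """Yield (train, val, test) block indices for leave-one-block-out CV.

    One block is held out for testing, the block before it serves as the
    validation set, and the remaining five blocks form the training set.
    Wrapping to the last block when the test block is the first one is an
    assumption; the paper does not specify this edge case.
    """
    for test in range(n_blocks):
        val = (test - 1) % n_blocks
        train = [b for b in range(n_blocks) if b not in (test, val)]
        yield train, val, test

# Usage: enumerate all 7 folds.
for train, val, test in block_splits():
    print(f"train={train}  val={val}  test={test}")
```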
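
The reported hyperparameters (Adam, η = 0.001, batch size 256, 100 epochs) map directly onto a standard PyTorch training loop. The snippet below is a minimal sketch under those settings; the stand-in linear model, the tensor shapes (62 EEG channels × 200 time samples, 40 classes), and the random placeholder data are assumptions, not the paper's architecture or data.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for the EEG encoder; the actual EEG2Video
# architecture is more involved.
model = nn.Sequential(nn.Flatten(), nn.Linear(62 * 200, 40))

# Hyperparameters as reported: Adam with learning rate 0.001.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Random placeholder tensors standing in for preprocessed EEG windows
# (assumed shape: channels x time) and semantic class labels.
X = torch.randn(1024, 62, 200)
y = torch.randint(0, 40, (1024,))
loader = DataLoader(TensorDataset(X, y), batch_size=256, shuffle=True)

# Batch size 256 and 100 training epochs, as reported.
for epoch in range(100):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```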
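
For context on the 79.8% figure: 2-way accuracy in decoding studies is commonly computed by asking, per trial, whether the model scores the true target above one randomly drawn distractor (chance level 50%). The function below sketches that convention only; the paper's exact protocol may differ, and the `two_way_accuracy` name and similarity-score inputs are assumptions.

```python
import numpy as np

def two_way_accuracy(sim_true: np.ndarray, sim_distractor: np.ndarray) -> float:
    """Fraction of trials where the true target outscores a distractor.

    sim_true[i]: similarity between the prediction for trial i and its
    ground-truth target; sim_distractor[i]: similarity to a randomly
    chosen distractor. Chance level is 0.5.
    """
    return float(np.mean(sim_true > sim_distractor))

# Usage example with toy similarity scores.
rng = np.random.default_rng(0)
print(two_way_accuracy(rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 100)))
```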