Visual Decoding and Reconstruction via EEG Embeddings with Guided Diffusion

Authors: Dongyang Li, Chen Wei, Shiying Li, Jiachen Zou, Quanying Liu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this study, we present an end-to-end EEG-based visual reconstruction zero-shot framework... The experimental results indicate that our EEG-based visual zero-shot framework achieves SOTA performance in classification, retrieval and reconstruction..."
Researcher Affiliation | Academia | Dongyang Li, Chen Wei, Shiying Li, Jiachen Zou, Quanying Liu; Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, China ({lidy2023, weic3}@mail.sustech.edu.cn; liuqy@sustech.edu.cn)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code is available at https://github.com/ncclab-sustech/EEG_Image_decode."
Open Datasets | Yes | "We conducted our experiments on the THINGS-EEG dataset's training set [8, 6]. To verify the versatility of ATM for embedding electrophysiological data, we tested it on the MEG modality using the THINGS-MEG dataset [18]."
Dataset Splits | Yes | "We split off the last batch of the original training set as the validation set and selected the best model based on the minimum validation loss over 40 epochs. For fairness, all models' hyperparameters were kept consistent. In our study, we compared the performance of different encoders on the within-subject test set and the cross-subject (leave-one-subject-out) test set (see Appendix H)."
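The two splits described in that excerpt can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the function names, subject keys, and batch layout are assumptions.

```python
# Sketch of the splits described above: hold out the last batch of the
# training set for validation, and build leave-one-subject-out folds.
# All names and the data layout are illustrative assumptions.

def hold_out_last_batch(train_samples, batch_size):
    """Split off the last batch of the training set as a validation set."""
    return train_samples[:-batch_size], train_samples[-batch_size:]

def leave_one_subject_out(data_by_subject, test_subject):
    """Train on all subjects except one; test on the held-out subject."""
    train = {s: d for s, d in data_by_subject.items() if s != test_subject}
    test = data_by_subject[test_subject]
    return train, test
```

With N subjects, iterating `leave_one_subject_out` over every subject yields the N cross-subject folds the review refers to.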
Hardware Specification | Yes | "All experiments can be completed on a single NVIDIA RTX 4090 GPU."
Software Dependencies | No | The paper mentions the "Adam optimizer [19]" but does not specify version numbers for any software libraries or dependencies (e.g., PyTorch) required for replication.
Experiment Setup | Yes | "We used the Adam optimizer [19] to train the across-subject model on a set of approximately 496,200 samples, and the within-subject model on a set of about 66,160 samples, with an initial learning rate of 3 × 10^-4 and batch sizes of 16 and 1024. Our initial temperature parameter was set to 0.07."
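The reported temperature of 0.07 points to a CLIP-style contrastive objective between EEG and image embeddings. A minimal sketch of such a symmetric InfoNCE loss is shown below; this is an assumption-laden reimplementation for illustration, not the authors' training code, and the learning-rate constant is included only as a reminder of the reported Adam setting.

```python
# Minimal sketch of a CLIP-style (InfoNCE) contrastive loss, assuming the
# paper's reported initial temperature (0.07) scales cosine similarities.
# Illustrative only; not the authors' implementation.
import numpy as np

TEMPERATURE = 0.07    # initial temperature parameter reported in the paper
LEARNING_RATE = 3e-4  # initial Adam learning rate reported in the paper

def _xent_diag(logits):
    """Cross-entropy where the matching pair sits on the diagonal."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def info_nce_loss(eeg_emb, img_emb, temperature=TEMPERATURE):
    """Symmetric contrastive loss over a batch of paired embeddings."""
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = (eeg @ img.T) / temperature  # (batch, batch) similarity matrix
    # Average the EEG-to-image and image-to-EEG directions.
    return 0.5 * (_xent_diag(logits) + _xent_diag(logits.T))
```

Perfectly aligned embedding pairs drive the loss toward zero, while mismatched pairs keep it near log(batch_size), which is the usual sanity check for this objective.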