CLIP-MUSED: CLIP-Guided Multi-Subject Visual Neural Information Semantic Decoding

Authors: Qiongyi Zhou, Changde Du, Shengpei Wang, Huiguang He

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our proposed method outperforms single-subject decoding methods and achieves state-of-the-art performance among the existing multi-subject methods on two fMRI datasets.
Researcher Affiliation | Academia | (1) Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences
Pseudocode | No | The paper provides mathematical formulations and architectural diagrams but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/CLIP-MUSED/CLIPMUSED.
Open Datasets | Yes | HCP (Glasser et al., 2013; Van Essen et al., 2012): This dataset is part of the Human Connectome Project (HCP), containing BOLD signals from 158 subjects.
Dataset Splits | Yes | For the single-subject decoding task, the training, validation, and test sets consist of 2000, 265, and 699 samples, respectively (an illustrative split is sketched below the table).
Hardware Specification | Yes | The models converge after approximately three hours of training on one NVIDIA A100 GPU.
Software Dependencies | No | The paper mentions 'Adam' as an optimizer and 'BrainIAK' for a baseline method but does not provide specific version numbers for software dependencies used in their own implementation, such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | The learning rate is set to 0.001, the batch size is 64, and the optimizer is Adam. We find the optimal values for the hyperparameters λ, λ_hlv, and λ_llv by grid search within the range [0.001, 0.01, 0.1]; the best values are λ = 0.001, λ_hlv = 0.001, λ_llv = 0.1 (a minimal training-loop sketch follows below).
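The dataset-splits row reports 2000 / 265 / 699 samples for the single-subject task. The following is a minimal sketch of how such an index-based split could be produced; the array names, seed, and the assumption of random index assignment are illustrative and not taken from the released CLIP-MUSED code.

```python
import numpy as np

# Split sizes reported for the single-subject decoding task
# (2000 train / 265 validation / 699 test); loading code is hypothetical.
N_TRAIN, N_VAL, N_TEST = 2000, 265, 699

rng = np.random.default_rng(seed=0)                    # fixed seed for a reproducible split
indices = rng.permutation(N_TRAIN + N_VAL + N_TEST)    # 2964 samples in total

train_idx = indices[:N_TRAIN]
val_idx = indices[N_TRAIN:N_TRAIN + N_VAL]
test_idx = indices[N_TRAIN + N_VAL:]

assert len(test_idx) == N_TEST
```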
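The experiment-setup row describes the reported training configuration: Adam with learning rate 0.001, batch size 64, and a grid search over [0.001, 0.01, 0.1] for the three loss weights. The sketch below illustrates that configuration with a toy model and placeholder auxiliary loss terms; it is not the authors' implementation, and the model, data, and loss stand-ins are assumptions for demonstration only.

```python
from itertools import product

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for fMRI features and labels; shapes are arbitrary.
X = torch.randn(2000, 128)
y = torch.randint(0, 10, (2000,))
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)  # batch size 64


def train_one_config(lam, lam_hlv, lam_llv, epochs=3):
    model = nn.Linear(128, 10)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, lr = 0.001
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            logits = model(xb)
            # Placeholder auxiliary terms standing in for the paper's additional
            # loss components weighted by lam, lam_hlv, and lam_llv.
            aux1 = logits.abs().mean()
            aux2 = logits.pow(2).mean()
            aux3 = xb.pow(2).mean()
            loss = ce(logits, yb) + lam * aux1 + lam_hlv * aux2 + lam_llv * aux3
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return loss.item()


# Grid search over [0.001, 0.01, 0.1] for each of the three loss weights.
grid = [0.001, 0.01, 0.1]
for lam, lam_hlv, lam_llv in product(grid, repeat=3):
    final_loss = train_one_config(lam, lam_hlv, lam_llv)
    print(f"lam={lam}, lam_hlv={lam_hlv}, lam_llv={lam_llv}: {final_loss:.4f}")
# In practice the configuration with the best validation metric is kept;
# the paper reports λ = 0.001, λ_hlv = 0.001, λ_llv = 0.1 as optimal.
```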