Mind Reader: Reconstructing complex images from brain activities

Authors: Sikun Lin, Thomas Sprague, Ambuj K Singh

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments are conducted on one Tesla V100 GPU and one Tesla T4 GPU.
Researcher Affiliation | Academia | Sikun Lin, Thomas Sprague, Ambuj K Singh, UC Santa Barbara, {sikun,tsprague,ambuj}@ucsb.edu
Pseudocode | No | The paper describes the pipeline and model components with text and diagrams, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | The code is publicly available at https://github.com/sklin93/mind-reader.
Open Datasets | Yes | In particular, the Natural Scenes Dataset (NSD [3]) was built to meet the needs of data-hungry deep learning models, sampling at an unprecedented scale compared to all prior works while having the highest resolution and signal-to-noise ratio (SNR). In addition, all the images used in NSD are sampled from MS-COCO [22], which has far richer contextual information and more detailed annotations compared to datasets that are commonly used in other fMRI studies (e.g., CelebA face dataset [23], ImageNet [10], self-curated symbols, grayscale datasets).
Dataset Splits | Yes | The dataset is split image-wise: 23715 samples corresponding to 8364 images are used as the train set, and 4035 samples corresponding to the remaining 1477 images are used as the validation set. (A sketch of an image-wise split follows the table.)
Hardware Specification | Yes | Our experiments are conducted on one Tesla V100 GPU and one Tesla T4 GPU.
Software Dependencies | No | The paper mentions using and adapting models such as StyleGAN2 and Lafite, and refers to external code repositories for these. However, it does not provide version numbers for the programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow) used in the experimental setup, information needed to reproduce the software environment. (A version-recording sketch follows the table.)
Experiment Setup | Yes | Additional experiment settings, including hyperparameters of the two training phases, are provided in appendices A.1 and A.2.
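
Because NSD presents each image multiple times, the image-wise split noted in the Dataset Splits row must keep all fMRI samples of one image in the same partition to avoid train/validation leakage. Below is a minimal sketch, assuming samples are (image_id, fmri_response) pairs; the function name is hypothetical, and the paper's actual split code lives in the linked repository. The 0.85 ratio approximates 8364 of the 9841 images going to training.

import random

def image_wise_split(samples, train_ratio=0.85, seed=0):
    """Split (image_id, fmri_response) pairs by unique image ID so that
    all repeated presentations of an image land in the same partition."""
    image_ids = sorted({img_id for img_id, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(image_ids)
    n_train = int(len(image_ids) * train_ratio)
    train_ids = set(image_ids[:n_train])
    train = [s for s in samples if s[0] in train_ids]
    val = [s for s in samples if s[0] not in train_ids]
    return train, val

With roughly three fMRI samples per image, a split like this yields sample counts on the order of the 23715/4035 figures reported above.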
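On the Software Dependencies gap: one lightweight remedy when rerunning the released code is to record the installed package versions directly. A minimal sketch follows; the package names are assumptions (StyleGAN2 and Lafite are typically PyTorch codebases), since the paper itself lists no versions.

import importlib.metadata as md

# Record pinned versions of the packages a rerun would likely depend on.
# Package names are assumptions; the paper does not list any versions.
for pkg in ("torch", "torchvision", "numpy", "Pillow"):
    try:
        print(f"{pkg}=={md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")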