MonkeySee: Space-time-resolved reconstructions of natural images from macaque multi-unit activity

Authors: Lynn Le, Paolo Papale, Katja Seeliger, Antonio Lozano, Thirza Dado, Feng Wang, Pieter Roelfsema, Marcel A. J. van Gerven, Yağmur Güçlütürk, Umut Güçlü

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we reconstruct naturalistic images directly from macaque brain signals using a convolutional neural network (CNN) based decoder. We investigate the ability of this CNN-based decoding technique to differentiate among neuronal populations from areas V1, V4, and IT, revealing distinct readout characteristics for each. Our results demonstrate high-precision reconstructions of naturalistic images, highlighting the efficiency of CNN-based decoders in advancing our knowledge of how the brain's representations translate into pixels.
Researcher Affiliation | Academia | Lynn Le1, Paolo Papale2, Katja Seeliger3, Antonio Lozano2, Thirza Dado1, Feng Wang2, Pieter Roelfsema2,4,5,6, Marcel van Gerven1, Yağmur Güçlütürk1, Umut Güçlü1. Affiliations: 1 Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands; 2 Netherlands Institute for Neuroscience, Amsterdam, Netherlands; 3 Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; 4 Centre for Neurogenomics and Cognitive Research, Vrije Universiteit, Amsterdam, Netherlands; 5 Institut de la Vision, Paris, France; 6 Amsterdam University Medical Center, Amsterdam, Netherlands
Pseudocode | No | The paper describes model architectures and training procedures in text and diagrams (e.g., Figures 5 and 6, and the U-Net architecture in Appendix A.1), but it contains no structured pseudocode or algorithm blocks, i.e., no clearly labeled algorithm sections or code-formatted procedures.
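Although no pseudocode is given, the overall shape of such a decoder can be sketched. Below is a minimal, illustrative PyTorch sketch of a CNN mapping a flat multi-unit activity (MUA) vector to an image; the 1024-unit input, all layer sizes, and the 96x96 output resolution are assumptions for illustration, not the authors' actual architecture.

```python
# Minimal sketch of a CNN decoder in the spirit of the paper's approach.
# Every size below is a hypothetical choice; the paper's real model is a
# U-Net-style network (Appendix A.1) trained with an adversarial loss.
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    def __init__(self, n_units=1024):
        super().__init__()
        # Project the flat MUA vector onto a small spatial feature map.
        self.fc = nn.Linear(n_units, 256 * 6 * 6)
        # Upsample 6x6 -> 96x96 with four stride-2 transposed convolutions.
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, mua):
        x = self.fc(mua).view(-1, 256, 6, 6)
        return self.deconv(x)

decoder = ToyDecoder()
fake_mua = torch.randn(8, 1024)      # batch of 8 simulated response vectors
print(decoder(fake_mua).shape)       # torch.Size([8, 3, 96, 96])
```

The paper's actual model additionally uses U-Net skip connections and a discriminator, both omitted here for brevity.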
Open Source Code | Yes | Source code is available on our GitHub repository: https://github.com/neuralcodinglab/MonkeySee
Open Datasets | Yes | We used images from the THINGS database [19], containing high-resolution images across various object categories. Reference [19]: Martin N. Hebart, Oliver Contier, Lina Teichmann, Adam H. Rockter, Charles Y. Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, and Chris I. Baker. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife, 12:e82580, 2023. The checklist also states: 'A data manuscript for the THINGS macaque visual cortex dataset is currently in preparation (see https://things-initiative.org)' and 'The stimulus dataset is publicly available and properly referenced and credited.'
Dataset Splits | No | The dataset comprised 22,348 training samples and 100 test samples, the latter used exclusively for testing and never during training. While Section 3.3.4 states that 'Early stopping is applied based on validation set performance to prevent overfitting', the paper does not specify the size, percentage, or methodology used to carve this validation set out of the training samples.
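Since the validation split is unspecified, any re-implementation must choose its own. A minimal sketch, assuming a hypothetical 90/10 split of the 22,348 training samples with a fixed seed (neither the ratio nor the seed is from the paper):

```python
# Hypothetical validation split for early stopping; the paper only says a
# validation set was used, not how it was constructed. Ratio and seed are
# arbitrary choices made for illustration.
import numpy as np

N_TRAIN_TOTAL = 22348                      # training-set size from the paper
rng = np.random.default_rng(seed=0)        # assumed seed, not the authors'
indices = rng.permutation(N_TRAIN_TOTAL)

n_val = int(0.1 * N_TRAIN_TOTAL)           # assumed 10% held out
val_idx, train_idx = indices[:n_val], indices[n_val:]
print(len(train_idx), len(val_idx))        # 20114 2234
```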
Hardware Specification | Yes | Training spanned 50 epochs on a Quadro RTX 6000 GPU, utilizing approximately 10,000 MiB of GPU memory.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, the VGG-19 network, and a U-Net architecture, but it does not specify version numbers for any libraries or frameworks (e.g., PyTorch, TensorFlow) or for the programming language used (e.g., the Python version), which are necessary for reproducible software dependencies.
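A low-effort mitigation when reproducing is to log the versions of the environment actually used. A minimal sketch, assuming a PyTorch-based stack (the paper itself names no framework, so the packages below are illustrative):

```python
# Record the software versions of the current environment so a run can be
# reproduced later. Package choices assume a PyTorch-based setup.
import sys
import torch
import torchvision

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA:", torch.version.cuda)         # None if built without CUDA
```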
Experiment Setup | Yes | We used the Adam optimizer with a learning rate of 0.002 and beta coefficients of 0.5 and 0.999 to ensure convergence. The loss function combined a discriminator loss weighted by α_discr = 0.01, a VGG feature loss weighted by β_vgg = 0.9, and an L1 pixel-wise loss weighted by β_pix = 0.09 to balance sensitivity. Training spanned 50 epochs on a Quadro RTX 6000 GPU, utilizing approximately 10,000 MiB of GPU memory.
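The optimizer settings and loss weights above are concrete enough to sketch in code. Only the Adam hyperparameters and the weights α_discr = 0.01, β_vgg = 0.9, β_pix = 0.09 come from the paper; the placeholder model, the dummy batch, and the stand-in loss terms (the actual model uses a trained discriminator and VGG-19 features) are illustrative assumptions.

```python
# Hedged sketch of the reported training objective: Adam(lr=0.002,
# betas=(0.5, 0.999)) plus a weighted sum of discriminator, VGG feature,
# and L1 pixel losses. Everything except the weights and optimizer
# settings is a placeholder.
import torch
import torch.nn.functional as F

ALPHA_DISCR, BETA_VGG, BETA_PIX = 0.01, 0.9, 0.09   # weights from the paper

def total_loss(discr_loss, vgg_feat_loss, pix_l1_loss):
    """Weighted combination of the three loss terms described in the paper."""
    return (ALPHA_DISCR * discr_loss
            + BETA_VGG * vgg_feat_loss
            + BETA_PIX * pix_l1_loss)

decoder = torch.nn.Linear(1024, 3 * 96 * 96)        # hypothetical stand-in model
optimizer = torch.optim.Adam(decoder.parameters(), lr=0.002, betas=(0.5, 0.999))

# One illustrative optimization step on random data:
recon = decoder(torch.randn(4, 1024))
target = torch.randn(4, 3 * 96 * 96)
loss = total_loss(torch.tensor(0.0),                # discriminator term omitted here
                  F.mse_loss(recon, target),        # stand-in for VGG feature loss
                  F.l1_loss(recon, target))
loss.backward()
optimizer.step()
```

Note that the three weights sum to 1.0, consistent with the stated aim of balancing the terms.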