BrainBits: How Much of the Brain are Generative Reconstruction Methods Using?
Authors: David Mayo, Christopher Wang, Asa Harbin, Abdulrahman Alabdulkareem, Albert Shaw, Boris Katz, Andrei Barbu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We introduce BrainBits, a method that uses a bottleneck to quantify the amount of signal extracted from neural recordings that is actually necessary to reproduce a method's reconstruction fidelity. We find that it takes surprisingly little information from the brain to produce reconstructions with high fidelity. |
| Researcher Affiliation | Collaboration | 1 MIT CSAIL, CBMM; 2 MIT Lincoln Laboratory; 3 Google DeepMind |
| Pseudocode | No | The paper describes methods and processes in text and diagrams but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/czlwang/BrainBits. Correspondence to DM and CW at {dmayo2, czw}@mit.edu and AH at asaharbin@ll.mit.edu. We release our code, which builds on the publicly available code in [17]. |
| Open Datasets | Yes | This family of methods has been facilitated by the growth of fMRI datasets containing pairs of stimuli and recorded neural data, the current largest of which is the publicly available Natural Scenes Dataset (NSD) [1]. The Natural Scenes Dataset contains fMRI recordings of multiple subjects cumulatively viewing tens of thousands of samples from the Microsoft COCO dataset [14]. As a result, it is a popular choice for many recent methods [6, 29], and we select it for our analysis. |
| Dataset Splits | No | The paper refers to training, validation, and test phases ("We train for 100 epochs and use the weights with the best validation loss at test time." Appendix A.1) but does not report the specific split sizes or proportions. |
| Hardware Specification | Yes | All mappings and image generations were computed on two Nvidia Titan RTXs over the course of a week. |
| Software Dependencies | No | The paper mentions using an "AdamW optimizer [15]" but does not specify version numbers for any software, libraries, or programming languages. |
| Experiment Setup | Yes | We train our network with a batch size b = 128, an AdamW optimizer [15], a weight decay of wd = 0.1 and a learning rate of lr = 0.01. We train for 100 epochs and use the weights with the best validation loss at test time. (Appendix A.1) |
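
The quoted experiment setup maps onto a straightforward training loop. The sketch below is a minimal, hypothetical illustration of a linear bottleneck mapper trained with the reported hyperparameters (batch size 128, AdamW, lr = 0.01, weight decay 0.1, 100 epochs, keeping the best-validation-loss weights); class and variable names such as `BottleneckMapper` and `train_loader` are assumptions for illustration, not identifiers from the authors' released code.

```python
# Hypothetical sketch only: a k-dimensional linear bottleneck on voxel activity,
# trained with the hyperparameters quoted in the Experiment Setup row above.
import copy
import torch
import torch.nn as nn

class BottleneckMapper(nn.Module):
    """Map voxel activity to a target embedding through a small bottleneck."""
    def __init__(self, n_voxels: int, bottleneck_dim: int, target_dim: int):
        super().__init__()
        self.compress = nn.Linear(n_voxels, bottleneck_dim)  # limits usable "bits" of brain signal
        self.expand = nn.Linear(bottleneck_dim, target_dim)   # maps to the reconstruction model's input

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        return self.expand(self.compress(voxels))

def train(model, train_loader, val_loader, epochs: int = 100):
    # Hyperparameters taken from the paper's Appendix A.1 quote; batch size 128
    # is assumed to be set when constructing the DataLoaders.
    optimizer = torch.optim.AdamW(model.parameters(), lr=0.01, weight_decay=0.1)
    loss_fn = nn.MSELoss()
    best_val, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for voxels, target in train_loader:
            optimizer.zero_grad()
            loss_fn(model(voxels), target).backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(v), t).item() for v, t in val_loader) / len(val_loader)
        if val < best_val:  # keep the weights with the best validation loss
            best_val, best_state = val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```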