Energy Guided Diffusion for Generating Neurally Exciting Images
Authors: Pawel Pierzchlewicz, Konstantin Willeke, Arne Nix, Pavithra Elumalai, Kelli Restivo, Tori Shinn, Cate Nealley, Gabrielle Rodriguez, Saumil Patel, Katrin Franke, Andreas Tolias, Fabian Sinz
NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this study, we introduce a novel readout architecture inspired by the mechanism of visual attention. This new architecture, which we call attention readout, together with a data-driven convolutional core outperforms previous task-driven models in predicting the activity of neurons in macaque area V4. However, as our predictive network becomes deeper and more complex, synthesizing MEIs via straightforward gradient ascent (GA) can struggle to produce qualitatively good results and overfit to idiosyncrasies of a more complex model, potentially decreasing the MEI's model-to-brain transferability. To solve this problem, we propose a diffusion-based method for generating MEIs via Energy Guidance (EGG). We show that for models of macaque V4, EGG generates single neuron MEIs that generalize better across varying model architectures than the state-of-the-art GA, while at the same time reducing computational costs by a factor of 4.7x, facilitating experimentally challenging closed-loop experiments. (See the EGG sampling sketch after the table.) |
| Researcher Affiliation | Academia | 1 Institute for Bioinformatics and Medical Informatics, Tübingen University, Tübingen, Germany; 2 Institute of Computer Science and Campus Institute Data Science, University of Göttingen, Germany; 3 Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; 4 Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, TX, USA; 5 Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA; ppierzc@cs.uni-goettingen.de |
| Pseudocode | No | No explicitly labeled 'Pseudocode' or 'Algorithm' block was found in the paper. |
| Open Source Code | Yes | The code is available at https://github.com/sinzlab/energy-guided-diffusion |
| Open Datasets | Yes | We use data from 1,244 Macaque V4 neurons from Willeke et al. [24] and briefly summarize their data acquisition in the supplementary materials section A.1. ... A collection of 24,075 images from ImageNet [68] was transformed into gray-scale and cropped to the central 420² px and had 8 bit intensity resolution. These images were presented as visual stimuli during standalone generation recordings of 1244 units and during closed-loop recordings of 82 units. |
| Dataset Splits | No | The paper mentions 'Training data' and 'set of test images' but does not specify the explicit percentages or counts for training, validation, and test splits needed for reproducibility. |
| Hardware Specification | Yes | All computations were performed on a single consumer-grade GPU: NVIDIA GeForce RTX 3090 or NVIDIA GeForce RTX 2080 Ti depending on the availability. |
| Software Dependencies | No | The paper mentions software components like 'SGD optimizer' and 'AdamW optimizer' but does not provide specific version numbers for any libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | For the GA method, we use Gaussian blur preconditioning of the gradient. The stochastic gradient descent (SGD) optimizer was used with a learning rate of 10 and the image was optimized for 1,000 steps. ... We set the energy scale to λ = 10 for the Gaussian model and λ = 5 for the Attention model. ... The diffusion process was run for 100 respaced time steps for the Gaussian model and 50 respaced time steps for the Attention model. For both EGG and GA, we set the norm of the 100×100 image to a fixed value of 25. (Hedged code sketches of both setups follow the table.) |
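
To make the reported EGG setup concrete, below is a minimal sketch of what an energy-guided diffusion sampling loop can look like, using the hyperparameters quoted above for the Gaussian readout model (100 respaced steps, energy scale λ = 10, fixed image norm of 25). The `eps_model` and `neuron` callables are tiny stand-ins for the pretrained diffusion model and the differentiable V4 response model, and the linear beta schedule and the shift of the posterior mean by the scaled energy gradient are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of energy-guided diffusion (EGG) sampling, assuming a DDPM-style
# noise predictor eps_model(x_t, t) and a differentiable response model neuron(x)
# for one V4 unit. Both are stand-ins here; the paper guides a pretrained
# diffusion model with an energy defined by the predictive network.
import torch

T = 100                                # respaced diffusion steps (Gaussian model setting)
lam = 10.0                             # energy scale λ for the Gaussian readout model
betas = torch.linspace(1e-4, 0.02, T)  # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

# --- hypothetical stand-ins for the pretrained models -----------------------
eps_model = lambda x, t: torch.zeros_like(x)   # noise predictor ε_θ(x_t, t)
neuron = lambda x: x.mean()                    # differentiable unit response

def egg_sample(shape=(1, 1, 100, 100)):
    x = torch.randn(shape)
    for t in reversed(range(T)):
        x = x.detach().requires_grad_(True)
        eps = eps_model(x, t)
        # Tweedie estimate of the clean image from the current noisy sample.
        x0_hat = (x - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
        # Energy = negative predicted activity; its gradient steers sampling
        # toward images that excite the unit.
        energy = -neuron(x0_hat)
        grad = torch.autograd.grad(energy, x)[0]
        # Standard DDPM posterior mean, shifted by the scaled energy gradient.
        mean = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        mean = mean - lam * grad
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    # Project the final image onto the fixed norm of 25, as done for EGG and GA.
    x = x.detach()
    return x * 25.0 / x.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
```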
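
For comparison, here is a similarly hedged sketch of the GA baseline as described in the setup row: ascent with a learning rate of 10 for 1,000 steps, Gaussian-blur preconditioning of the gradient, and re-projection onto a fixed image norm of 25. The `neuron` stand-in and the blur kernel size and sigma are assumptions made for illustration, not values from the paper.

```python
# Minimal sketch of the gradient-ascent (GA) baseline with Gaussian-blur
# preconditioning of the gradient. The response model `neuron` is a stand-in;
# kernel_size and sigma for the blur are assumed values.
import torch
from torchvision.transforms.functional import gaussian_blur

neuron = lambda x: x.mean()   # hypothetical differentiable unit response model

def ga_mei(shape=(1, 1, 100, 100), steps=1000, lr=10.0, norm=25.0, sigma=1.0):
    x = torch.randn(shape)
    x = x * norm / x.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
    for _ in range(steps):
        x.requires_grad_(True)
        act = neuron(x)
        grad = torch.autograd.grad(act, x)[0]
        # Precondition the gradient with a Gaussian blur to suppress
        # high-frequency artifacts before the ascent step.
        grad = gaussian_blur(grad, kernel_size=9, sigma=sigma)
        with torch.no_grad():
            x = x + lr * grad
            # Re-project onto the fixed norm after every step.
            x = x * norm / x.flatten(1).norm(dim=1).view(-1, 1, 1, 1)
    return x.detach()
```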