Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models
Authors: Yule Wang, Chengrui Li, Weihan Li, Anqi Wu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments to verify the efficacy of BeNeDiff on a widefield calcium imaging dataset, where a head-fixed mouse performs a visual decision-making task across multiple sessions [Musall et al., 2018; 2019]. The neural subspace in BeNeDiff exhibits high levels of disentanglement and neural reconstruction quality, as evidenced by multiple quantitative metrics. |
| Researcher Affiliation | Academia | Yule Wang, Georgia Institute of Technology, Atlanta, GA 30332, USA, yulewang@gatech.edu; Chengrui Li, Georgia Institute of Technology, Atlanta, GA 30332, USA, cnlichengrui@gatech.edu; Weihan Li, Georgia Institute of Technology, Atlanta, GA 30332, USA, weihanli@gatech.edu; Anqi Wu, Georgia Institute of Technology, Atlanta, GA 30332, USA, anqiwu@gatech.edu |
| Pseudocode | Yes | Algorithm 1: Generative Video Diffusion Model for Neural Dynamics Interpretation |
| Open Source Code | Yes | Our codes are available at https://github.com/BRAINML-GT/BeNeDiff. |
| Open Datasets | Yes | A head-fixed mouse performed a visual decision-making task while neural activity across the dorsal cortex was optically recorded using widefield calcium imaging [Musall et al., 2019; Churchland et al., 2019]. |
| Dataset Splits | No | cv R² is short for cross-validation coefficient of determination. The higher, the better. ... We conduct experiments to verify the efficacy of BeNeDiff on a widefield calcium imaging dataset, where a head-fixed mouse performs a visual decision-making task across multiple sessions [Musall et al., 2018; 2019]. |
| Hardware Specification | Yes | The diffusion model is trained on an Nvidia V100, using approximately 20 compute hours. |
| Software Dependencies | No | The paper mentions software components such as Adam Optimizer, Dropout, ReLU activation function, RNN architecture, 3D-UNet, and that gradients are computed using automatic differentiation [Paszke et al., 2017] (implying PyTorch). However, specific version numbers for these software dependencies are not provided. |
| Experiment Setup | Yes | Training Details of Neural LVM. ... We use the Adam optimizer [Kingma and Ba, 2014] for optimization and the learning rate is set to 0.001. The batch size is uniformly set to 32. The latent subspace factor number is fixed at 6, which is the same as the number of behaviors of interest. ... Training Details of Video Diffusion Models. ... The embedding input size to the UNet architecture is set to 32 and the UNet has three downsampling and upsampling layers. The number of diffusion timesteps is set to 200. The training batch size is set to 64, with a learning rate of 0.001. We use Group Normalization. |
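
For readers checking reproducibility of the Experiment Setup row, the quoted hyperparameters can be collected into a single configuration. The sketch below is a minimal, hypothetical PyTorch example, not the authors' released code: the `LVMConfig`, `DiffusionConfig`, and placeholder module names are assumptions introduced here for illustration, and only the quoted values (Adam optimizer, learning rate 0.001, batch sizes 32 and 64, 6 latent factors, a 32-dimensional UNet embedding with three down/upsampling levels, 200 diffusion timesteps, Group Normalization) come from the paper.

```python
# Hedged sketch of the quoted training configuration.
# Model classes are placeholders, not the authors' implementations.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class LVMConfig:
    """Neural LVM training settings quoted in the paper."""
    learning_rate: float = 1e-3
    batch_size: int = 32
    n_latent_factors: int = 6   # one factor per behavior of interest


@dataclass
class DiffusionConfig:
    """Video diffusion model settings quoted in the paper."""
    embedding_dim: int = 32      # embedding input size to the UNet
    n_unet_levels: int = 3       # downsampling / upsampling layers
    n_diffusion_steps: int = 200
    batch_size: int = 64
    learning_rate: float = 1e-3


def make_optimizer(model: nn.Module, lr: float) -> torch.optim.Adam:
    """Adam optimizer, as stated in the training details."""
    return torch.optim.Adam(model.parameters(), lr=lr)


if __name__ == "__main__":
    lvm_cfg = LVMConfig()
    diff_cfg = DiffusionConfig()

    # Toy stand-in for the neural LVM encoder (hypothetical architecture).
    toy_lvm = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, lvm_cfg.n_latent_factors),
    )
    optimizer = make_optimizer(toy_lvm, lvm_cfg.learning_rate)

    # Group Normalization is the normalization layer quoted for the UNet blocks.
    norm = nn.GroupNorm(num_groups=8, num_channels=diff_cfg.embedding_dim)
    print(optimizer, norm)
```

The split into two config objects mirrors how the paper reports the settings: one set for the neural latent variable model and one for the video diffusion model; the actual 3D-UNet and RNN components would replace the toy module above.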