Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity
Authors: Lu Mi, Richard Xu, Sridhama Prakhya, Albert Lin, Nir Shavit, Aravinthan Samuel, Srinivas C Turaga
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We applied this model to an experimental whole-brain dataset, and found that connectomic constraints enable our LVM to predict the activity of neurons whose activity was withheld significantly better than models unconstrained by a connectome. We explored models with different degrees of biophysical detail, and found that models with realistic conductance-based synapses provide markedly better predictions than current-based synapses for this system. |
| Researcher Affiliation | Academia | Lu Mi1,3, Richard Xu1, Sridhama Prakhya1, Albert Lin2, Nir Shavit3, Aravinthan D.T. Samuel2, Srinivas C. Turaga1; 1 HHMI Janelia Research Campus, 2 Harvard University, 3 MIT. {xur,prakhyas,turagas}@janelia.hhmi.org, {albertlin,samuel}@g.harvard.edu, {lumi,shanir}@mit.edu |
| Pseudocode | Yes | Below, we have included some pseudocode describing the architecture of our network (a runnable PyTorch sketch follows this table). Inference_Network(calcium_fluor, missing_data_mask, sensory_input): calcium_input = concatenate(calcium_fluor, missing_data_mask); conv1_out = relu(conv1(calcium_input)); up1_out = upsample(conv1_out); conv2_out = relu(conv2(up1_out)); up2_out = upsample(conv2_out); sensory_conv_out = relu(conv3(sensory_input)); merged_calcium_sensory = concatenate(up2_out, sensory_conv_out); mean_latent_neuron_voltage = conv4(merged_calcium_sensory); std_latent_neuron_voltage = softplus(conv5(merged_calcium_sensory)); sample_latent_neuron_voltage = mean_latent_neuron_voltage + rand_norm * std_latent_neuron_voltage; return sample_latent_neuron_voltage. Generative_Model(sample_latent_neuron_voltage, sensory_input): neuron_voltage_dynamics = leaky_integrator_connectome_dynamics(sample_latent_neuron_voltage, sensory_input) (Equation 1); calcium_concentration_dynamics = leaky_integrator_calcium_model(neuron_voltage_dynamics) (Equation 5); fluorescence_trace = nonlinear_affine_transform(calcium_concentration_dynamics) (Equation 6); return fluorescence_trace |
| Open Source Code | Yes | We released our software and datasets (https://github.com/TuragaLab/wormvae) for reproducibility. |
| Open Datasets | Yes | We applied the CC-LVM to a calcium imaging dataset in which immobilized, pan-neuronally labeled C. elegans were presented with a panel of chemosensory stimuli (2-butanone, 2,3-pentanedione, and NaCl) [32]. The connectome constraints we applied utilized the anatomical connectivity data from [30]. |
| Dataset Splits | Yes | We tested this hypothesis by performing neuron holdout evaluations, withholding a single bilateral pair of measured neurons from the model during both training and testing. [...] Another evaluation method we performed was to hold out the data from a handful of individual worms, train the model on the remaining worms, and predict the activity of the neurons of the held-out individuals. For each of the 9 model variants, we trained on 15 worms, and tested the model on 6 withheld worms. (A minimal split sketch follows this table.) |
| Hardware Specification | Yes | We trained each of our LVMs on a single Quadro RTX 8000 GPU. |
| Software Dependencies | No | The paper mentions using "PyTorch" but does not specify a version number. It also mentions "Adam" as an optimizer, but without a version number or an explicit dependency listing. |
| Experiment Setup | Yes | We used an initial learning rate of 3e-4, with a learning rate scheduler with a step size of 50 and a gamma of 0.5. We also set a gradient clip value of 1. Each model was trained for 300 epochs, in which each epoch is one full pass through all the training data. (A minimal training-loop sketch follows this table.) |
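
The Pseudocode row describes the inference network in outline. Below is a minimal PyTorch rendering of that outline; the channel counts, kernel sizes, and upsampling scheme are illustrative assumptions, not the paper's reported hyperparameters, and the generative model (Equations 1, 5, and 6) is not reimplemented here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InferenceNetwork(nn.Module):
    """Amortized posterior over latent neuron voltages.

    Hidden width, kernel sizes, and the upsampling scheme are
    illustrative guesses, not the values used in the paper.
    """
    def __init__(self, n_neurons, n_stim_channels, hidden=64):
        super().__init__()
        # calcium fluorescence and its missing-data mask are stacked channel-wise
        self.conv1 = nn.Conv1d(2 * n_neurons, hidden, kernel_size=5, padding=2)
        self.conv2 = nn.Conv1d(hidden, hidden, kernel_size=5, padding=2)
        self.conv3 = nn.Conv1d(n_stim_channels, hidden, kernel_size=5, padding=2)
        self.mean_head = nn.Conv1d(2 * hidden, n_neurons, kernel_size=1)
        self.std_head = nn.Conv1d(2 * hidden, n_neurons, kernel_size=1)

    def forward(self, calcium_fluor, missing_data_mask, sensory_input):
        # all inputs are (batch, channels, time) tensors
        x = torch.cat([calcium_fluor, missing_data_mask], dim=1)
        x = F.relu(self.conv1(x))
        x = F.interpolate(x, scale_factor=2)        # upsample in time
        x = F.relu(self.conv2(x))
        x = F.interpolate(x, scale_factor=2)
        s = F.relu(self.conv3(sensory_input))
        s = F.interpolate(s, size=x.shape[-1])      # match time resolution
        merged = torch.cat([x, s], dim=1)
        mean = self.mean_head(merged)
        std = F.softplus(self.std_head(merged))
        # reparameterized sample of the latent voltage trace
        return mean + torch.randn_like(std) * std
```

In the full model, the sampled voltage trace would then be passed through the leaky-integrator generative model (Equations 1, 5, and 6 in the paper) to produce predicted fluorescence traces.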
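The two holdout protocols in the Dataset Splits row can be expressed in a few lines of NumPy. The worm counts (15 train / 6 test) come from the paper; the neuron names and the random seed are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)  # hypothetical seed

# Worm holdout: train on 15 worms, test on 6 withheld worms (counts from the paper).
worm_ids = rng.permutation(21)                     # 21 = 15 + 6
train_worms, test_worms = worm_ids[:15], worm_ids[15:]

# Neuron holdout: mask one bilateral pair during both training and testing.
neuron_names = ["AVAL", "AVAR", "AWCL", "AWCR"]    # illustrative subset
held_out_pair = {"AWCL", "AWCR"}                   # hypothetical choice of pair
observed = np.array([name not in held_out_pair for name in neuron_names])
# `observed` would feed into the missing-data mask given to the inference
# network, so the held-out pair must be predicted, never observed.
```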
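The Experiment Setup row maps directly onto standard PyTorch optimizer plumbing. The sketch below uses a placeholder model and dummy data; only the learning rate, scheduler step size and gamma, clip value, and epoch count are taken from the paper.

```python
import torch

model = torch.nn.Linear(16, 16)                    # placeholder for the LVM
data = [torch.randn(8, 16) for _ in range(4)]      # dummy training batches

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
# Halve the learning rate every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(300):                           # one epoch = one full data pass
    for x in data:
        optimizer.zero_grad()
        loss = model(x).pow(2).mean()              # stand-in for the ELBO objective
        loss.backward()
        torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=1.0)
        optimizer.step()
    scheduler.step()
```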