Active learning of neural population dynamics using two-photon holographic optogenetics
Authors: Andrew Wagenmaker, Lu Mi, Marton Rozsa, Matthew Bull, Karel Svoboda, Kayvon Daie, Matthew Golub, Kevin G. Jamieson
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using neural population responses to photostimulation in mouse motor cortex, we demonstrate the efficacy of a low-rank linear dynamical systems model, and develop an active learning procedure which takes advantage of low-rank structure to determine informative photostimulation patterns. We demonstrate our approach on both real and synthetic data, obtaining in some cases as much as a two-fold reduction in the amount of data required to reach a given predictive power. (A hypothetical sketch of such a low-rank model follows the table.) |
| Researcher Affiliation | Academia | Andrew Wagenmaker (University of California, Berkeley); Lu Mi (Georgia Tech); Marton Rozsa (Allen Institute for Neural Dynamics); Matthew S. Bull (Allen Institute for Brain Science); Karel Svoboda (Allen Institute for Neural Dynamics); Kayvon Daie (Allen Institute for Neural Dynamics); Matthew D. Golub (University of Washington); Kevin Jamieson (University of Washington) |
| Pseudocode | Yes | Algorithm 1 Active Estimation of Low-Rank Matrices |
| Open Source Code | No | We have not yet released our code but plan to in the future. |
| Open Datasets | No | Neural population activity was recorded in mouse motor cortex using two-photon calcium imaging at 20 Hz of a 1mm × 1mm field of view (FOV) containing 500-700 neurons. Each recording spanned approximately 25 minutes and 2000 photostimulation trials... We also hope to release the data we used in the future. |
| Dataset Splits | Yes | We split each of our photostimulation datasets into non-overlapping training and test datasets. All models were trained exclusively using the training dataset and were then evaluated (as shown in Figure 2) using the test dataset. To build our test datasets, we randomly chose 5 (out of the 100 total) unique photostimulation patterns and then included all 70-timestep windows about each of the 20 instances of those 5 unique photostimuli. The resulting test set amounted to 20% of each dataset. (A hypothetical reconstruction of this split follows the table.) |
| Hardware Specification | Yes | Gradient descent was implemented in PyTorch and ran on a single NVIDIA Tesla T4 GPU... Models were implemented with PyTorch, and optimized on a single NVIDIA Tesla T4 GPU... For both sets of experiments in Section 5, we ran on 56 Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz CPUs. |
| Software Dependencies | No | The paper mentions "PyTorch", the "Suite2p package [82]", and "Adam [83]", but does not specify version numbers for these software dependencies. |
| Experiment Setup | Yes | For the low-rank AR-k models, we fit all parameters via gradient descent using Adam [83] over 100 training epochs with a learning rate of 0.01... Both encoder and decoder GRUs had 512 hidden units. We used Adam optimization with a learning rate of 0.001 over 4000 training epochs of batch size 100. (Both training setups are sketched after the table.) |
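
The Research Type and Experiment Setup rows together pin down the low-rank AR-k model and its reported training recipe (Adam, learning rate 0.01, 100 epochs). Below is a minimal PyTorch sketch under those settings; the rank, the factorization of each lag matrix as `U_i @ V_i.T`, the stimulation input map `B`, and all tensor names are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class LowRankARk(nn.Module):
    """AR-k linear dynamics; each lag matrix is an assumed rank-r product U_i @ V_i.T."""

    def __init__(self, n_neurons, k=2, rank=10):
        super().__init__()
        self.k = k
        self.U = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(n_neurons, rank)) for _ in range(k)])
        self.V = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(n_neurons, rank)) for _ in range(k)])
        # Hypothetical map from photostimulation input to neural activity.
        self.B = nn.Parameter(0.01 * torch.randn(n_neurons, n_neurons))

    def forward(self, x_hist, u):
        # x_hist: (batch, k, n_neurons), most recent lag first; u: (batch, n_neurons).
        pred = u @ self.B.T
        for i in range(self.k):
            pred = pred + (x_hist[:, i] @ self.V[i]) @ self.U[i].T
        return pred

# Placeholder tensors standing in for windowed recordings (not the paper's data).
batch, n_neurons, k = 100, 600, 2
x_hist = torch.randn(batch, k, n_neurons)
u = torch.randn(batch, n_neurons)
x_next = torch.randn(batch, n_neurons)

model = LowRankARk(n_neurons, k=k, rank=10)
opt = torch.optim.Adam(model.parameters(), lr=0.01)  # reported: lr 0.01, 100 epochs
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_hist, u), x_next)
    loss.backward()
    opt.step()
```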
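The Dataset Splits row describes holding out 5 of the 100 unique photostimulation patterns and collecting the 70-timestep windows around every instance of those patterns. Below is a hypothetical reconstruction of that split, assuming a `(T, n_neurons)` activity array plus per-trial stimulus times and pattern IDs; the window alignment and variable names are guesses, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PATTERNS, WINDOW = 100, 70  # 100 unique photostimuli; 70-timestep windows

def split_by_pattern(activity, stim_times, stim_ids, n_test_patterns=5):
    """activity: (T, n_neurons) array; stim_times/stim_ids: (n_trials,) arrays.

    Holds out all windows for 5 randomly chosen photostimulation patterns,
    mirroring the ~20% test fraction reported above.
    """
    test_patterns = set(rng.choice(N_PATTERNS, size=n_test_patterns, replace=False))
    train_windows, test_windows = [], []
    for t, pid in zip(stim_times, stim_ids):
        window = activity[t : t + WINDOW]  # alignment about the stimulus is assumed
        (test_windows if pid in test_patterns else train_windows).append(window)
    return np.stack(train_windows), np.stack(test_windows)
```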
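The Experiment Setup row also reports the GRU baseline's hyperparameters: encoder and decoder GRUs with 512 hidden units each, trained with Adam at learning rate 0.001 for 4000 epochs with batch size 100. The sketch below matches those numbers, but the sequence-to-sequence wiring and linear readout are assumed, not taken from the paper.

```python
import torch
import torch.nn as nn

class Seq2SeqGRU(nn.Module):
    """Assumed encoder-decoder GRU with the reported 512 hidden units."""

    def __init__(self, n_neurons, hidden=512):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, hidden, batch_first=True)
        self.decoder = nn.GRU(n_neurons, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_neurons)

    def forward(self, past, future_inputs):
        # Encode pre-stimulation activity, then roll the decoder forward over
        # post-stimulation inputs starting from the encoder's final state.
        _, h = self.encoder(past)
        out, _ = self.decoder(future_inputs, h)
        return self.readout(out)

model = Seq2SeqGRU(n_neurons=600)
opt = torch.optim.Adam(model.parameters(), lr=0.001)  # reported: 4000 epochs, batch 100
```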