BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos
Authors: Eleanor Batty, Matthew Whiteway, Shreya Saxena, Dan Biderman, Taiga Abe, Simon Musall, Winthrop Gillis, Jeffrey Markowitz, Anne Churchland, John P. Cunningham, Sandeep R. Datta, Scott Linderman, Liam Paninski
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate this framework on two different experimental paradigms using distinct behavioral and neural recording technologies. |
| Researcher Affiliation | Academia | Eleanor Batty*, Matthew R Whiteway*, Shreya Saxena, Dan Biderman, Taiga Abe (Columbia University, {erb2180, m.whiteway, ss5513, db3236, ta2507}@columbia.edu); Simon Musall (Cold Spring Harbor, simon.musall@gmail.com); Winthrop Gillis (Harvard Medical School, win.gillis@gmail.com); Jeffrey E Markowitz (Harvard Medical School, jeffrey_markowitz@hms.harvard.edu); Anne Churchland (Cold Spring Harbor, churchland@cshl.edu); John Cunningham (Columbia University, jpc2181@columbia.edu); Sandeep Robert Datta (Harvard Medical School, srdatta@hms.harvard.edu); Scott W Linderman (Stanford University, swl1@stanford.edu); Liam Paninski (Columbia University, liam@stat.columbia.edu) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | A python implementation of our pipeline is available at https://github.com/ebatty/behavenet, which is based on the PyTorch [46], ssm [47], and Test Tube [48] libraries. |
| Open Datasets | Yes | Widefield Calcium Imaging (WFCI) dataset [8, 19]. ... Neuropixels (NP) dataset [9, 18]. |
| Dataset Splits | Yes | Training terminates when MSE on held-out validation data, averaged over the previous 10 epochs, begins to increase. |
| Hardware Specification | No | The paper does not specify the hardware used for its experiments (GPU/CPU models, clock speeds, memory, or other machine details). |
| Software Dependencies | No | A python implementation of our pipeline is available at https://github.com/ebatty/behavenet, which is based on the PyTorch [46], ssm [47], and Test Tube [48] libraries. Specific version numbers for these libraries are not provided. |
| Experiment Setup | Yes | We train the autoencoders by minimizing the mean squared error (MSE) between original and reconstructed frames using the Adam optimizer [39] with a learning rate of 10^-4. Models are trained for a minimum of 500 epochs and a maximum of 1000 epochs. Training terminates when MSE on held-out validation data, averaged over the previous 10 epochs, begins to increase. |
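
The Experiment Setup row above specifies the optimizer, learning rate, epoch bounds, and early-stopping criterion. The following is a minimal PyTorch sketch of that procedure, not the authors' BehaveNet code: the `autoencoder`, `train_loader`, and `val_loader` objects are placeholders, and reading "begins to increase" as a comparison of running means over consecutive 10-epoch windows is one plausible interpretation of the quoted rule.

```python
# Sketch of the training procedure described in the Experiment Setup row:
# Adam with lr = 1e-4, MSE reconstruction loss, 500-1000 epochs, and early
# stopping once validation MSE averaged over the previous 10 epochs rises.
import torch
import torch.nn.functional as F

def train_autoencoder(autoencoder, train_loader, val_loader,
                      min_epochs=500, max_epochs=1000, window=10):
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-4)
    val_history = []

    for epoch in range(max_epochs):
        autoencoder.train()
        for frames in train_loader:
            optimizer.zero_grad()
            recon = autoencoder(frames)
            loss = F.mse_loss(recon, frames)  # reconstruction MSE
            loss.backward()
            optimizer.step()

        # MSE on held-out validation data for this epoch.
        autoencoder.eval()
        with torch.no_grad():
            val_mse = sum(F.mse_loss(autoencoder(f), f).item()
                          for f in val_loader) / len(val_loader)
        val_history.append(val_mse)

        # Stop once the mean over the last `window` epochs exceeds the mean
        # over the `window` epochs before that (assumed reading of the rule),
        # but never before the minimum epoch count.
        if epoch + 1 >= min_epochs and len(val_history) >= 2 * window:
            recent = sum(val_history[-window:]) / window
            previous = sum(val_history[-2 * window:-window]) / window
            if recent > previous:
                break
    return autoencoder
```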
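
Because the Software Dependencies row notes that no version numbers are given for PyTorch, ssm, or Test Tube, a reproduction should at least record the versions actually installed. The snippet below is a hypothetical logging helper for that purpose; the distribution names `torch`, `ssm`, and `test_tube` are assumed to match the projects cited in the paper and may need adjusting for a given environment.

```python
# Record installed versions of the libraries the BehaveNet pipeline depends on.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "ssm", "test_tube"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```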