STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers
Authors: Trung Le, Eli Shlizerman
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activities across four neural datasets, demonstrating its capability to capture autonomous and non-autonomous dynamics spanning different cortical regions while being completely agnostic to the specific behaviors at hand. |
| Researcher Affiliation | Academia | Trung Le, University of Washington, Seattle, WA, tle45@uw.edu; Eli Shlizerman, University of Washington, Seattle, WA, shlizee@uw.edu |
| Pseudocode | No | The provided text does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/shlizee/STNDT |
| Open Datasets | Yes | We evaluate our model performance on four neural datasets in the publicly available Neural Latents Benchmark [23]: MC_Maze, MC_RTT, Area2_Bump, and DMFC_RSG. |
| Dataset Splits | Yes | Bayesian hyperparameter tuning: We follow [47] to use Bayesian optimization for hyperparameter tuning. We observe that the primary metric co-bps is not well correlated with the mask loss (see Figure 1 in the Appendix), while co-bps, vel R2, psth R2 and fp-bps are more pairwise correlated. Therefore, we run Bayesian optimization to optimize co-bps for M models, then select the best N models as ranked by validation co-bps and ensemble them by taking the mean of the predicted rates of these N models. (A sketch of this top-N ensembling step appears after the table.) |
| Hardware Specification | No | The main text of the paper does not specify the hardware used for experiments. It defers to the Appendix, which is not provided in the given text. |
| Software Dependencies | No | The provided text does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | Bayesian hyperparameter tuning: We follow [47] to use Bayesian optimization for hyperparameter tuning. ... We used 2 heads for all reported models. ... Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please see Appendix. (A sketch of such a search loop appears after the table.) |
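
The Experiment Setup row above describes a Bayesian hyperparameter search that maximizes co-bps over M candidate models. The paper defers the concrete search space to its Appendix, so the following is only a minimal sketch of such a loop, using Optuna as a stand-in optimizer; `train_and_eval_cobps`, the hyperparameter names, and their ranges are hypothetical placeholders rather than the authors' configuration (the paper does report using 2 attention heads).

```python
import optuna

def train_and_eval_cobps(params: dict) -> float:
    """Placeholder: train one model with `params` and return its validation
    co-bps. Stubbed with a constant here so the sketch runs end to end."""
    return 0.0

def objective(trial: optuna.Trial) -> float:
    # Hypothetical search space; the actual STNDT ranges are in the paper's Appendix.
    params = {
        "lr": trial.suggest_float("lr", 1e-5, 1e-2, log=True),
        "dropout": trial.suggest_float("dropout", 0.1, 0.6),
        "n_layers": trial.suggest_int("n_layers", 2, 8),
        "n_heads": 2,  # the paper reports 2 heads for all models
    }
    return train_and_eval_cobps(params)

# Maximize validation co-bps over M trials, then rank the trained models
# to pick the top N for ensembling (see the next sketch).
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)  # n_trials plays the role of M
print(study.best_params)
```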
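
The Dataset Splits row quotes the ensembling recipe: keep the N best of the M trained models as ranked by validation co-bps, then average their predicted firing rates. Below is a minimal NumPy sketch of that selection-and-averaging step, assuming each model's predictions are stored as a (trials, time, neurons) array; all names here are illustrative, not from the STNDT codebase.

```python
import numpy as np

def ensemble_rates(predicted_rates: list[np.ndarray],
                   val_cobps: list[float],
                   n_best: int) -> np.ndarray:
    """Select the n_best models by validation co-bps and return the mean
    of their predicted firing rates.

    predicted_rates: one (trials, time, neurons) array per candidate model.
    val_cobps: validation co-bps score for each candidate model.
    """
    order = np.argsort(val_cobps)[::-1]  # best (highest co-bps) first
    top = order[:n_best]
    return np.mean([predicted_rates[i] for i in top], axis=0)

# Usage with M = 5 hypothetical candidate models and N = 3 kept for the ensemble.
rng = np.random.default_rng(0)
rates = [rng.random((10, 100, 50)) for _ in range(5)]
scores = [0.31, 0.35, 0.28, 0.33, 0.30]
ensembled = ensemble_rates(rates, scores, n_best=3)  # shape (10, 100, 50)
```

Averaging in rate space matches the paper's description ("taking the mean of the predicted rates") and keeps the ensemble output directly usable for the co-bps evaluation.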