Continuous Spatiotemporal Transformer
Authors: Antonio Henrique De Oliveira Fonseca, Emanuele Zappala, Josue Ortega Caro, David Van Dijk
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We benchmark CST against traditional transformers as well as other spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data. |
| Researcher Affiliation | Academia | 1Interdepartmental Neuroscience Program, Yale University, New Haven, CT, USA 2Department of Computer Science, Yale University, New Haven, CT, USA 3Department of Neuroscience, Yale University, New Haven, CT, USA 4Wu Tsai Institute, Yale University, New Haven, CT, USA 5Department of Internal Medicine (Cardiology), Yale University, New Haven, CT, USA 6Interdepartmental Program in Computational Biology & Bioinformatics, Yale University, New Haven, CT, USA. |
| Pseudocode | Yes | Algorithm 1: Implementation of the Sobolev loss (a hedged sketch of such a loss follows this table). |
| Open Source Code | Yes | CST: https://github.com/vandijklab/CST |
| Open Datasets | Yes | This data consists of 500 2D spirals of 100 time points each. Details about the data generation are described in Appendix C, and an example of a curve from this dataset is shown in Figure 6 (a hedged generation-and-split sketch follows this table). |
| Dataset Splits | Yes | The data was split into 70% of the spirals for training and 30% for validation. |
| Hardware Specification | Yes | All models were trained on an RTX 3090 NVIDIA GPU for up to 150 epochs or until convergence. |
| Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers such as Python 3.8 or PyTorch 1.9) were provided. The paper mentions using 'Pytorch' but without a version number. |
| Experiment Setup | Yes | Both CST and the Transformer have 4 layers, 4 heads, and d_model = 32 (see Table 5 for more details; a hedged configuration sketch follows this table). |
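
The Sobolev loss named in Algorithm 1 is not reproduced on this page. As a hedged illustration only, the sketch below shows one common way to implement a first-order Sobolev-style loss in PyTorch: an MSE term on the signal plus an MSE term on its finite-difference temporal derivative. The `alpha` weight and the use of forward differences are assumptions, not the paper's actual Algorithm 1.

```python
import torch

def sobolev_loss(pred, target, alpha=1.0):
    """Hypothetical first-order Sobolev-style loss.

    pred, target: tensors of shape (batch, time, features).
    alpha: assumed weight on the derivative term.
    """
    # MSE on the signal values themselves.
    value_term = torch.mean((pred - target) ** 2)
    # Forward finite differences along the time axis approximate d/dt.
    d_pred = pred[:, 1:] - pred[:, :-1]
    d_target = target[:, 1:] - target[:, :-1]
    deriv_term = torch.mean((d_pred - d_target) ** 2)
    return value_term + alpha * deriv_term
```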
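
For the spiral dataset and split rows above, a minimal sketch of generating 500 2D spirals of 100 time points each and applying the reported 70/30 train/validation split is given below. The spiral parameterization (growth rate, phase ranges) is an illustrative guess; the actual generation procedure is described in the paper's Appendix C.

```python
import numpy as np

def make_spirals(n_spirals=500, n_points=100, seed=0):
    """Generate 2D spirals of shape (n_spirals, n_points, 2).
    Growth-rate and phase ranges are illustrative, not the paper's values."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 4.0 * np.pi, n_points)
    spirals = []
    for _ in range(n_spirals):
        a = rng.uniform(0.5, 1.5)              # random radial growth rate
        phase = rng.uniform(0.0, 2.0 * np.pi)  # random starting angle
        r = a * t / (4.0 * np.pi)
        xy = np.stack([r * np.cos(t + phase), r * np.sin(t + phase)], axis=-1)
        spirals.append(xy)
    return np.stack(spirals)

data = make_spirals()
n_train = int(0.7 * len(data))  # 70% train / 30% validation, as reported
train, val = data[:n_train], data[n_train:]
```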
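
The experiment-setup row reports 4 layers, 4 heads, and d_model = 32 for both CST and the baseline Transformer. The sketch below instantiates a plain PyTorch encoder at those sizes; the feed-forward width and dropout are assumptions (the paper's Table 5 has the full hyperparameters), and this is the vanilla baseline, not the CST architecture itself.

```python
import torch
import torch.nn as nn

d_model = 32  # reported model width
layer = nn.TransformerEncoderLayer(
    d_model=d_model,
    nhead=4,               # reported number of heads
    dim_feedforward=128,   # assumed; not reported in this row
    dropout=0.1,           # assumed; not reported in this row
    batch_first=True,
)
model = nn.TransformerEncoder(layer, num_layers=4)  # reported depth

x = torch.randn(8, 100, d_model)  # (batch, time, features)
out = model(x)                    # output has the same shape as x
```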