Neural Integro-Differential Equations
Authors: Emanuele Zappala, Antonio H. de O. Fonseca, Andrew H. Moberly, Michael J. Higley, Chadi Abdallah, Jessica A. Cardin, David van Dijk
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test NIDE on several toy and brain activity datasets and demonstrate that NIDE outperforms other models. These tasks include time extrapolation as well as predicting dynamics from unseen initial conditions, which we test on whole-cortex activity recordings in freely behaving mice. |
| Researcher Affiliation | Academia | Yale University, New Haven, CT, USA; Baylor College of Medicine, Houston, TX, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about open-sourcing the code or a link to a code repository. |
| Open Datasets | Yes | We compare NIDE to NODE in a time extrapolation task of calcium imaging recordings in mice passively exposed to a visual stimulus (Lohani et al. 2020). |
| Dataset Splits | No | The paper does not provide specific dataset split information (percentages, sample counts, or detailed splitting methodology) for training, validation, and test sets. It mentions 'training' and 'validation data' in the context of decomposition, but without details on how the split was made. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'torchquad' and 'PyTorch' but does not provide specific version numbers for these software components. |
| Experiment Setup | No | The paper states that 'Details about the architecture of the model used in each task are provided in Table 2' of Zappala et al. (2022), likely referring to the appendix of the arXiv version, but specific experimental setup details (such as hyperparameters or training configurations) are not provided in the main text itself. |