SNN-PDE: Learning Dynamic PDEs from Data with Simplicial Neural Networks
Authors: Jae Choi, Yuzhou Chen, Huikyo Lee, Hyun Kim, Yulia R. Gel
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on a wide range of synthetic dynamical systems and the wildfire data provided by the National Oceanic and Atmospheric Administration (NOAA) demonstrate that SNN-PDE delivers significant gains upon state-of-the-art baselines in handling unstructured grids and irregular time intervals of complex dynamical systems, and offers competitive forecasting capabilities for weather and air quality forecasting. |
| Researcher Affiliation | Collaboration | Jae Choi,1 Yuzhou Chen,2 Huikyo Lee,3 Hyun Kim,4 Yulia R. Gel5,6 1Department of Computer Science, University of Texas at Dallas; 2Department of Computer and Information Sciences, Temple University; 3Jet Propulsion Laboratory, California Institute of Technology; 4Air Resources Laboratory, National Oceanic and Atmospheric Administration; 5Department of Mathematical Sciences, University of Texas at Dallas; 6National Science Foundation |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available at https://github.com/SNNPDE/SNN-PDE.git. |
| Open Datasets | Yes | We validate our proposed SNN-PDE model on both synthetic and real-world datasets. On (i) synthetic datasets we focus on dynamical systems on R: (1) the convection-diffusion (Conv-Diff) equation, (2) the heat equation, (3) the Burgers equation. For (ii) the real-world dataset, we consider HYSPLIT (Stein et al. 2015)... Furthermore, we select neighbors for each node by applying Delaunay triangulation to the measurement positions and use the Packing of Parallelograms algorithm for network generation. Similarly, we can observe that our SNN-PDE always achieves competitive performance on all 3 datasets, which reveals that local and global topological information and higher-order dependencies can enhance the model expressiveness. We also use the Stanford bunny retrieved from the PointCleanNet database (Turk and Levoy 1994). |
| Dataset Splits | No | The paper specifies details for training and test sets (e.g., "the training set contains 24 simulations and 21 timestamps... and the test set contains 50 simulations and 21 timestamps...") but does not explicitly mention or provide details for a separate 'validation' dataset split or its methodology. |
| Hardware Specification | Yes | We implement SNN-PDE with the PyTorch framework on one NVIDIA RTX A5000 GPU card with up to 48GB memory. |
| Software Dependencies | No | The paper mentions using the "PyTorch framework" and an "adaptive-order implicit Adams solver" but does not specify their version numbers or other software dependencies with version numbers. |
| Experiment Setup | Yes | In our experiments, we utilize the adaptive-order implicit Adams solver with the relative tolerance rtol = 1e-7 and absolute tolerance atol = 1e-5. For synthetic and real-world datasets, we set 200 and 10 epochs as training iterations respectively, and we run all models 5 times and report the mean test accuracy and standard deviation. The learning rate is searched in {1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1}, the embedding dimension of the node embedding dictionary Eϕ is searched in {1, 2, 3, 5, 10}, and the hidden layer dimension nhid is searched in {4, 8, 16, 32, 40, 60}. |
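The neighbor-selection step quoted in the "Open Datasets" row (Delaunay triangulation over measurement positions) can be sketched as below. This is an illustrative reconstruction, not the authors' released code: the function name and the random toy coordinates are made up, and only the triangulation-to-edge-list idea comes from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """Return a neighbor set per node: two nodes are neighbors
    iff they share an edge of some Delaunay triangle."""
    tri = Delaunay(points)
    neighbors = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:  # each simplex holds 3 vertex indices (in 2-D)
        for a in simplex:
            for b in simplex:
                if a != b:
                    neighbors[a].add(b)
    return neighbors

# Toy 2-D "measurement positions" (hypothetical, for illustration only)
rng = np.random.default_rng(0)
points = rng.random((20, 2))
nbrs = delaunay_neighbors(points)
```

The resulting adjacency is symmetric by construction, which is what a graph (or the 1-skeleton of a simplicial complex) built from sensor locations would need.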
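The hyperparameter ranges in the "Experiment Setup" row can be enumerated as a plain grid. The paper does not state the search strategy, so an exhaustive sweep is an assumption here; the three value lists themselves are quoted from the paper.

```python
from itertools import product

# Search ranges quoted from the experimental setup
learning_rates = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
embed_dims = [1, 2, 3, 5, 10]      # dimension of node embedding dictionary
hidden_dims = [4, 8, 16, 32, 40, 60]  # hidden layer dimension nhid

# Full Cartesian product: 7 * 5 * 6 = 210 candidate configurations
grid = list(product(learning_rates, embed_dims, hidden_dims))
print(len(grid))  # 210
```

Even this modest grid yields 210 runs before the 5 repetitions reported in the paper, which helps explain why the search spaces were kept small.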