Space-Time Continuous PDE Forecasting using Equivariant Neural Fields
Authors: David Knigge, David Wessels, Riccardo Valperga, Samuele Papa, Jan-jakob Sonke, Erik Bekkers, Efstratios Gavves
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our framework on different PDEs defined over a variety of geometries in Sec. 4, with differing equivariance constraints, showing competitive performance over other neural PDE solvers. We intend to show the impact of symmetry-preservation in continuous PDE solving. To this end we perform a range of experiments assessing different qualities of our model on tasks with different symmetries. |
| Researcher Affiliation | Academia | 1 University of Amsterdam, 2 Netherlands Cancer Institute. d.m.knigge@uva.nl, d.r.wessels@uva.nl |
| Pseudocode | Yes | See Alg. 1 for pseudocode of the training loop that we use, written for a single datasample for simplicity of notation. |
| Open Source Code | Yes | Code is available on GitHub. |
| Open Datasets | No | All datasets are obtained by randomly sampling disjoint sets of initial conditions for train and test sets, and solving them using numerical methods. Dataset-specific details on generation can be found in Appx E. The paper describes how to generate the datasets using publicly available tools, but does not provide a direct link or repository for the generated datasets themselves. |
| Dataset Splits | No | All datasets are obtained by randomly sampling disjoint sets of initial conditions for train and test sets, and solving them using numerical methods. The paper describes train and test splits but does not explicitly mention a separate validation split. |
| Hardware Specification | Yes | We run all experiments on a single A100. |
| Software Dependencies | No | For creating the dataset of PDE solutions we used py-pde [54] for Navier-Stokes and the diffusion equation on the plane. For the shallow-water equation and the diffusion equation on the sphere, as well as the internally heated convection in a 3D ball we used Dedalus [10]. The paper mentions software tools used but does not provide specific version numbers for them. |
| Experiment Setup | Yes | We provide hyperparameters per experiment. We optimize the weights of the neural field fθ, and neural ODE Fψ with Adam [23] with a learning rate of 1E-4 and 1E-3 respectively. We initialize the inner learning rate that we use in Meta-SGD [28] for learning zν at 1.0 for p and 5.0 for c. For the neural ODE Fψ, we use 3 of our message passing layers in the architecture specified in [5], with a hidden dimensionality of 128. |
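The Meta-SGD inner-loop update referenced in the experiment setup can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the latent `zν` splits into a pose component `p` and a context component `c`, each updated with its own inner learning rate initialized at the reported values (1.0 for `p`, 5.0 for `c`).

```python
def meta_sgd_inner_step(z, grads, inner_lr):
    """One Meta-SGD-style inner update: z <- z - lr * grad.

    `z` and `grads` map each latent component name ("p", "c") to a list
    of scalars; `inner_lr` holds a separate (in Meta-SGD, learnable)
    learning rate per component.
    """
    return {
        key: [zi - inner_lr[key] * gi for zi, gi in zip(z[key], grads[key])]
        for key in z
    }

# Inner learning rates initialized as reported: 1.0 for pose p, 5.0 for context c.
inner_lr = {"p": 1.0, "c": 5.0}

# Hypothetical latent values and gradients, for illustration only.
z = {"p": [0.2, -0.1], "c": [1.0, 0.5]}
grads = {"p": [0.1, 0.1], "c": [0.2, -0.2]}

z_new = meta_sgd_inner_step(z, grads, inner_lr)
# p is nudged gently (lr 1.0); c moves five times as far per unit gradient (lr 5.0).
```

In the paper's setup these inner rates are themselves optimized, while the outer weights of fθ and Fψ are trained with Adam at 1e-4 and 1e-3 respectively.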