Neural SPH: Improved Neural Modeling of Lagrangian Fluid Dynamics
Authors: Artur Toshev, Jonas A. Erbesdobler, Nikolaus A. Adams, Johannes Brandstetter
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we identify particle clustering originating from tensile instabilities as one of the primary pitfalls. Based on these insights, we enhance both training and rollout inference of state-of-the-art GNN-based simulators with varying components from standard SPH solvers, including pressure, viscous, and external force components. All Neural SPH-enhanced simulators achieve better performance than the baseline GNNs, often by orders of magnitude in terms of rollout error, allowing for significantly longer rollouts and significantly better physics modeling. |
| Researcher Affiliation | Collaboration | (1) Chair of Aerodynamics and Fluid Mechanics, School of Engineering and Design, Technical University of Munich, Garching, Germany; (2) Munich Institute of Integrated Materials, Energy and Process Engineering, Technical University of Munich, Germany; (3) ELLIS Unit Linz, LIT AI Lab, Institute for Machine Learning, Johannes Kepler University, Linz, Austria; (4) NXAI GmbH, Austria. |
| Pseudocode | No | The paper describes the methodology in text but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available under https://github.com/tumaer/neuralsph. |
| Open Datasets | Yes | Our analyses are based on the datasets of Toshev & Adams (2024), accompanying the LagrangeBench paper (Toshev et al., 2024a). |
| Dataset Splits | Yes | In this context, the LagrangeBench datasets pre-define a split of 50/25/25, which is far from sufficient if we want stable error estimates on rollouts of 400-step length, as also discussed, e.g., in Fu et al. (2023b). (An illustrative split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details, such as GPU/CPU models or memory specifications, used for running the experiments. |
| Software Dependencies | No | The paper mentions implementation in JAX (Bradbury et al., 2018) and JAX-SPH (Toshev et al., 2024b), but does not list key software dependencies with specific version numbers (e.g., Python, JAX, CUDA) needed for reproducibility. |
| Experiment Setup | Yes | We summarize the used hyperparameters in Table 3 and Appendix B. |
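
The Dataset Splits row above notes that the LagrangeBench datasets pre-define a 50/25/25 train/validation/test split at the trajectory level. The snippet below is a minimal, hypothetical sketch of such a split in Python; the directory layout, `.h5` file naming, and the `split_trajectories` helper are illustrative assumptions and do not reflect the actual LagrangeBench or Neural SPH API.

```python
# Hypothetical sketch: partition trajectory files into 50% train, 25% valid, 25% test.
# File layout (one .h5 file per trajectory) is an assumption for illustration only.
import random
from pathlib import Path


def split_trajectories(data_dir: str, seed: int = 0):
    """Return (train, valid, test) lists of trajectory files in a 50/25/25 split."""
    files = sorted(Path(data_dir).glob("*.h5"))  # assumed: one file per trajectory
    rng = random.Random(seed)
    rng.shuffle(files)

    n = len(files)
    n_train = n // 2               # 50%
    n_valid = (n - n_train) // 2   # 25%; remainder goes to test

    train = files[:n_train]
    valid = files[n_train:n_train + n_valid]
    test = files[n_train + n_valid:]
    return train, valid, test


# Example usage (hypothetical path):
# train, valid, test = split_trajectories("datasets/RPF_2D")
```

A fixed seed keeps the partition reproducible across runs, which matters when comparing rollout errors between baseline and enhanced simulators on the same held-out trajectories.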