Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics
Authors: Lukas Prantl, Benjamin Ummenhofer, Vladlen Koltun, Nils Thuerey
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on a range of different, challenging fluid scenarios. Among others, we demonstrate that our approach generalizes to new scenarios with up to one million particles. Our results show that the proposed algorithm can learn complex dynamics while outperforming existing approaches in generalization and training performance. |
| Researcher Affiliation | Academia | Lukas Prantl, Technical University of Munich (lukas.prantl@tum.de); Nils Thuerey, Technical University of Munich |
| Pseudocode | No | The paper does not include any figures, blocks, or sections explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | An implementation of our approach is available at https://github.com/tum-pbs/DMCF. |
| Open Datasets | Yes | We primarily use data from a high-fidelity SPH solver with adaptive time stepping [1]. The resulting two-dimensional dataset 'WBC-SPH' consists of randomly generated obstacle geometries and fluid regions. Gravity direction and strength are additionally varied across simulations. In addition to this primary dataset, we also use the MPM-based fluid dataset 'Water Ramps' from Sanchez-Gonzalez et al. [37] and the three-dimensional liquid dataset 'Liquid3d' from Ummenhofer et al. [48] for additional evaluations. Both consist of randomized fluid regions with constant gravity. A more detailed description of the datasets is provided in App. A.3. |
| Dataset Splits | No | The paper mentions training data and testing data, e.g., 'The training data consists of a different number of fluid particles...' (Section 3, Standing Liquid) and 'We trained and evaluate networks with our WBC-SPH dataset.' (Section 3, Comparisons with Previous Work). However, it does not provide explicit details on train/validation/test splits by percentage, sample counts, or a detailed splitting methodology for reproducibility. |
| Hardware Specification | No | The paper mentions 'vast amounts of computational resources' (Introduction) and provides inference times and speed-ups (Performance section), but does not specify any particular hardware components like CPU or GPU models (e.g., NVIDIA A100, Intel Core i7) used for running the experiments. |
| Software Dependencies | No | The paper does not explicitly provide specific software dependency details, such as programming languages, libraries, or frameworks with their version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1') needed to replicate the experiment. |
| Experiment Setup | No | The paper describes aspects of its training strategy, such as 'For temporal stability, we use a rollout of T frames at training time. That is, for each training iteration, we run the network for T time steps...' and 'We provide the details of the implementation in App. A.2.' (Section 2.4). However, the main text does not explicitly provide concrete hyperparameter values like learning rate, batch size, or number of epochs. |
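
To make the Experiment Setup row above concrete, the sketch below illustrates what a T-step rollout training loop of the kind the paper describes ("for each training iteration, we run the network for T time steps") could look like. This is a minimal, hypothetical example, not the authors' implementation (see https://github.com/tum-pbs/DMCF for that): the `ToyParticleNet` model, the integration scheme, the loss, and all hyperparameter values are illustrative placeholders.

```python
# Illustrative sketch only: a generic T-step rollout training loop,
# NOT the authors' implementation (their code is at https://github.com/tum-pbs/DMCF).
# The model, data, and hyperparameters below are hypothetical placeholders.
import torch


class ToyParticleNet(torch.nn.Module):
    """Hypothetical stand-in: predicts per-particle accelerations from positions and velocities."""

    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(6, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, pos, vel):
        return self.mlp(torch.cat([pos, vel], dim=-1))


def rollout_loss(model, pos, vel, targets, dt=0.01):
    """Unroll the network for T steps and average the loss against ground-truth positions.

    pos, vel: (N, 3) initial particle state; targets: (T, N, 3) ground-truth positions.
    """
    loss = 0.0
    for t in range(targets.shape[0]):
        acc = model(pos, vel)        # predicted per-particle acceleration
        vel = vel + dt * acc         # simple symplectic Euler integration (placeholder)
        pos = pos + dt * vel
        loss = loss + torch.mean((pos - targets[t]) ** 2)
    return loss / targets.shape[0]


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyParticleNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # placeholder learning rate
    N, T = 128, 4                                          # illustrative particle count and rollout length
    pos, vel = torch.randn(N, 3), torch.zeros(N, 3)
    targets = torch.randn(T, N, 3)                         # stand-in for solver ground truth
    loss = rollout_loss(model, pos, vel, targets)
    loss.backward()
    opt.step()
    print(f"rollout loss: {loss.item():.4f}")
```

The actual rollout length T, learning rate, batch size, and integration scheme are exactly the details the paper defers to App. A.2 and the released code, which is why the Experiment Setup variable is marked "No" above.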