Lagrangian Fluid Simulation with Continuous Convolutions
Authors: Benjamin Ummenhofer, Lukas Prantl, Nils Thuerey, Vladlen Koltun
ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our network architecture can simulate different materials, generalizes to arbitrary collision geometries, and can be used for inverse problems. In addition, we demonstrate that our continuous convolutions outperform prior formulations in terms of accuracy and speed. Experimental results indicate that the presented approach outperforms a state-of-the-art graph-based framework (Li et al., 2019). |
| Researcher Affiliation | Collaboration | Benjamin Ummenhofer (Intel Labs); Lukas Prantl and Nils Thuerey (Technical University of Munich); Vladlen Koltun (Intel Labs) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We will release the code to facilitate such development. Our continuous convolution implementation will be made available as part of Open3D (Zhou et al., 2018). |
| Open Datasets | Yes | We train our fluid simulation network in supervised fashion based on particle trajectories produced by classic ("ground-truth") physics simulation. We have trained our network on multiple datasets. For quantitative comparisons with prior work we trained our network on the dam break data from Li et al. (2019). The data was generated with FleX, which is a position-based simulator that targets real-time applications (Macklin et al., 2014). |
| Dataset Splits | No | The paper mentions training and test sets ("We generate 2000 scenes for training and 300 for testing.") but does not explicitly describe a validation set or its split. |
| Hardware Specification | Yes | Training takes about a day for our method with our convolutions on an NVIDIA RTX 2080Ti. Training with PCNN convolutions (Wang et al., 2018) takes about 2 days on 4 GPUs. For KPConvs we used a Quadro RTX 6000 with 24 GB of RAM due to the higher memory requirements. All runtimes were measured on a system with an Intel Core i9-7960 and an NVIDIA RTX 2080Ti. |
| Software Dependencies | Yes | We use the TensorFlow framework for implementing the training procedure. All other weights are initialized with the respective default initializers of TensorFlow version 1.12. |
| Experiment Setup | Yes | We optimize L over 50,000 iterations with Adam (Kingma & Ba, 2015) and a learning rate decay with multiple steps, starting with a learning rate of 0.001 and stopping at 1.56 × 10⁻⁵. We use Adam as optimizer and train with a batch size of 16 and an initial learning rate of 0.001. We halve the learning rate at steps 20000, 25000, ..., 45000. For the convolutions we use the random uniform initializer with range [-0.05, 0.05]. The output of the network is scaled by 1/128 to roughly adjust the output range to the ground-truth position correction of the training data. For our convolutions, we use spherical filters with an empirically determined radius of R = 4.5h. (A sketch of this schedule follows the table.) |
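
The learning-rate schedule quoted in the Experiment Setup row is fully determined by the numbers given there: halving 0.001 at each of the six boundary steps 20000, 25000, ..., 45000 gives 0.001 / 2⁶ = 1.5625 × 10⁻⁵, which matches the stated final rate. The snippet below is a minimal sketch of that schedule and the output scaling, not the authors' released code; the function name and constants are illustrative.

```python
# Minimal sketch (not the authors' code): the stepwise learning-rate decay
# described in the Experiment Setup row, assuming the rate is halved at
# every boundary step that has been passed.

def learning_rate(step,
                  initial_lr=1e-3,
                  boundaries=(20000, 25000, 30000, 35000, 40000, 45000)):
    """Piecewise-constant schedule: halve the rate once per boundary passed."""
    num_halvings = sum(step >= b for b in boundaries)
    return initial_lr * 0.5 ** num_halvings

# Sanity check: after all six halvings the rate is 0.001 / 2**6 = 1.5625e-5,
# matching the final learning rate quoted in the experiment setup.
assert abs(learning_rate(50000) - 1.5625e-5) < 1e-12

# Output scaling mentioned in the same row: the network output is multiplied
# by 1/128 to roughly match the magnitude of the ground-truth position
# corrections in the training data.
OUTPUT_SCALE = 1.0 / 128.0
```

In TensorFlow 1.12, which the paper reports using, such a piecewise-constant decay would typically be expressed with `tf.train.piecewise_constant` and passed to `tf.train.AdamOptimizer`; the pure-Python version above simply makes the arithmetic explicit.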