Symmetric Basis Convolutions for Learning Lagrangian Fluid Mechanics
Authors: Rene Winchenbach, Nils Thuerey
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose a general formulation for continuous convolutions using separable basis functions as a superset of existing methods and evaluate a large set of basis functions in the context of (a) a compressible 1D SPH simulation, (b) a weakly compressible 2D SPH simulation, and (c) an incompressible 2D SPH simulation. We demonstrate that even and odd symmetries included in the basis functions are key aspects of stability and accuracy. Our broad evaluation shows that Fourier-based continuous convolutions outperform all other architectures regarding accuracy and generalization. (See the basis-function sketch after the table.) |
| Researcher Affiliation | Academia | Rene Winchenbach & Nils Thuerey Physics-based Simulation Group Technical University Munich {rene.winchenbach,nils.thuerey}@tum.de |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | An implementation of our approach, as well as complete datasets and solver implementations, is available at https://github.com/tum-pbs/SFBC. |
| Open Datasets | Yes | An implementation of our approach, as well as complete datasets and solver implementations, is available at https://github.com/tum-pbs/SFBC. |
| Dataset Splits | Yes | Out of these, we use 32 (chosen randomly) as the training set, see Fig. 13, and the remaining 4 as the testing set, see Fig. 12. Overall, we generate 32 training samples, see Fig. 14, and 4 testing samples, see Fig. 15. We start the training with an initial rollout length of 1, which is increased every second epoch by 1, up to a maximum unroll length during training of 10. During training, we compute the L2 difference between ground truth and network prediction for a batch size of 4 without temporal unrolling, where each batch is the result of picking 4 random samples across the entire training dataset where each training timestep is used at most once. Our training consists of 5 epochs, each consisting of 1000 weight updates, with an initial learning rate of 10⁻³ that is halved after every epoch. |
| Hardware Specification | Yes | All measurements were done on a system with an Nvidia RTX A5000 GPU with 24 GiB of VRAM and an Intel Xeon 6242R CPU with 754 GiB of RAM. |
| Software Dependencies | No | We built all of our code, including the SPH simulations, using PyTorch and PyTorch Geometric for graph processing, e.g., neighborhood searches. PyTorch (Paszke et al., 2019) (and related frameworks) or a more direct implementation such as Nvidia's Cutlass library (Thakkar et al., 2023). |
| Experiment Setup | Yes | For the width of the basis functions, we chose n = 6, based on the results from the toy problems and a hidden architecture for the MLP-based approaches of 2 deep and 128 wide, in line with Sanchez-Gonzalez et al. (2020). Our training consists of 5 epochs, each consisting of 1000 weight updates, with an initial learning rate of 10⁻³ that is halved after every epoch. We utilize 36 random initial conditions to generate our dataset and evaluate 2048 timesteps each. Out of these, we use 32 (chosen randomly) as the training set, see Fig. 13, and the remaining 4 as the testing set, see Fig. 12. (See the training-schedule sketch after the table.) |
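
The "Research Type" row summarizes the paper's core idea: continuous convolutions whose filters are built from separable 1D basis functions, where alternating even (cosine) and odd (sine) terms give the symmetry properties the authors identify as key to stability. The PyTorch sketch below illustrates this construction for n = 6 basis terms; the function names (`fourier_basis`, `separable_conv`), the normalization of the coordinates, and the frequency scaling are illustrative assumptions rather than the authors' implementation, which is available in the SFBC repository linked above.

```python
import math
import torch

def fourier_basis(q: torch.Tensor, n: int) -> torch.Tensor:
    """Evaluate n Fourier-style basis terms on coordinates q in [-1, 1].

    Term 0 is constant; odd-indexed terms are antisymmetric (sine) and
    even-indexed terms are symmetric (cosine), mirroring the even/odd
    symmetry discussed in the paper. Sketch only; the exact terms in
    the official SFBC code may differ.
    """
    terms = [torch.ones_like(q)]
    for k in range(1, n):
        freq = (k + 1) // 2
        if k % 2 == 1:  # odd symmetry
            terms.append(torch.sin(math.pi * freq * q))
        else:           # even symmetry
            terms.append(torch.cos(math.pi * freq * q))
    return torch.stack(terms, dim=-1)  # [..., n]

def separable_conv(x_ij: torch.Tensor, feats_j: torch.Tensor,
                   weights: torch.Tensor, n: int = 6) -> torch.Tensor:
    """Continuous convolution over 2D relative positions via a separable
    basis: the 2D filter is the outer product of per-axis 1D bases,
    contracted with learned coefficients.

    x_ij:    [E, 2]             relative neighbor positions in [-1, 1]^2
    feats_j: [E, Cin]           neighbor features
    weights: [n, n, Cin, Cout]  learned filter coefficients
    """
    bx = fourier_basis(x_ij[:, 0], n)           # [E, n]
    by = fourier_basis(x_ij[:, 1], n)           # [E, n]
    basis = torch.einsum('eu,ev->euv', bx, by)  # [E, n, n]
    # Per-edge messages; a full layer would scatter-sum these onto the
    # receiving particles (e.g., via PyTorch Geometric or index_add_).
    return torch.einsum('euv,uvio,ei->eo', basis, weights, feats_j)

# Toy usage with random edges and features:
x_ij = torch.rand(128, 2) * 2 - 1
msgs = separable_conv(x_ij, torch.randn(128, 8), torch.randn(6, 6, 8, 16))
print(msgs.shape)  # torch.Size([128, 16])
```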
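
The "Dataset Splits" and "Experiment Setup" rows quote enough detail to sketch the optimization loop: a random 32/4 split of 36 initial conditions, 5 epochs of 1000 weight updates, an initial learning rate of 10⁻³ halved after every epoch, and a rollout length that grows by 1 every second epoch up to 10. The snippet below is a hedged reconstruction under stated assumptions: the optimizer (Adam) is not named in the quotes, and the model is a stand-in; the authoritative script lives in the SFBC repository.

```python
import torch

# Random 32/4 train/test split of the 36 initial conditions (quoted as
# "chosen randomly"; the seed here is an arbitrary assumption).
g = torch.Generator().manual_seed(0)
perm = torch.randperm(36, generator=g)
train_ids, test_ids = perm[:32], perm[32:]

model = torch.nn.Linear(2, 2)  # stand-in for the actual network
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer assumed
# "halved after every epoch"
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1, gamma=0.5)

for epoch in range(5):                    # 5 epochs x 1000 weight updates
    unroll = min(1 + epoch // 2, 10)      # +1 every second epoch, capped at 10
    for _ in range(1000):
        # One batch of 4 samples drawn across the training set, each
        # training timestep used at most once; the loss is the L2
        # difference between prediction and ground truth.
        pass
    sched.step()
```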