Generating Liquid Simulations with Deformation-aware Neural Networks

Authors: Lukas Prantl, Boris Bonev, Nils Thuerey

ICLR 2019

Reproducibility variables (assessed result, followed by the supporting LLM response):
Research Type: Experimental
    "To demonstrate the effectiveness of our approach, we showcase our method with several complex examples of flowing liquids with topology changes. Our representation makes it possible to rapidly generate the desired implicit surfaces. We have implemented a mobile application to demonstrate that real-time interactions with complex liquid effects are possible with our approach."
Researcher Affiliation: Academia
    Lukas Prantl, Boris Bonev & Nils Thuerey, Department of Computer Science, Technical University of Munich, Boltzmannstr. 3, 85748 Garching, Germany
Pseudocode: Yes
    ALGORITHM 1: Training the deformation network. (A sketch of the corresponding training loop follows the table.)
Open Source Code: No
    The paper mentions a demo app available on Google Play and supplementary materials on a TUM publication page, but it does not link to source code for the described method or state that the code is open source.
Open Datasets: No
    "As training data we generate sets of implicit surfaces from liquid simulations with the FLIP method (Bridson, 2015). For our 2D inputs, we use single time steps, while our 4D data concatenates 3D surfaces over time to assemble a space-time surface. ... For our two dimensional data set, we use the SDFs extracted from 2D simulations of a drop falling into a basin. ... An overview of the space of 2156 training samples of size 100² can be found in the supplemental materials." (A sketch of the space-time frame assembly follows the table.)
Dataset Splits: Yes
    "we sample the parameter domain with a regular 44 × 49 grid, which gives us 2156 training samples, of which we used 100 as a validation set." (A sketch of this parameter-grid split follows the table.)
Hardware Specification: Yes
    "Timings were measured on a Xeon E5-1630 with 3.7GHz. ... Table 1: Performance and setup details of our 4D data sets in the Android app measured on a Samsung S8 device."
Software Dependencies: No
    The paper mentions using an ADAM optimizer and the FLIP method for simulation, but it does not specify version numbers for these or for any other software dependencies, such as programming languages or libraries.
Experiment Setup: Yes
    "To train both networks we use stochastic gradient descent with an ADAM optimizer and a learning rate of 10⁻³. Training is performed separately for both networks, with typically 1000 steps for f_w, and another ca. 9000 steps for f_d. Full parameters can be found in App. B, Table 2." (The training-loop sketch after the table uses these settings.)
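The 4D inputs quoted in the Open Datasets row are built by concatenating 3D SDF frames over time into a space-time surface. Below is a minimal NumPy sketch of that assembly, assuming each frame is a 100³ signed-distance grid stored with np.save; the loader and file layout are hypothetical, not taken from the paper.

```python
import numpy as np

def load_sdf_frame(path):
    # Hypothetical loader; assumes each frame was saved with np.save
    # as a 3D signed-distance grid, e.g. of shape (100, 100, 100).
    return np.load(path).astype(np.float32)

def assemble_space_time_sdf(frame_paths):
    # Concatenate consecutive 3D SDF frames along a new leading time
    # axis, giving one 4D space-time surface of shape (T, X, Y, Z).
    frames = [load_sdf_frame(p) for p in frame_paths]
    return np.stack(frames, axis=0)
```

Stacking along a leading time axis yields one 4D sample per simulation sequence, matching the quoted description of concatenating 3D surfaces over time.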
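The Dataset Splits row quotes a regular 44 × 49 sampling of a two-parameter domain, giving 2156 samples with 100 held out for validation. The quote does not say how those 100 are chosen, so the seeded random hold-out below is an assumption, as are the parameter names and the [0, 1] normalization.

```python
import numpy as np

# Regular 44 x 49 sampling of the two simulation parameters
# (normalized to [0, 1]); 44 * 49 = 2156 samples in total.
p1, p2 = np.meshgrid(np.linspace(0.0, 1.0, 44),
                     np.linspace(0.0, 1.0, 49), indexing="ij")
params = np.stack([p1.ravel(), p2.ravel()], axis=1)  # shape (2156, 2)

# Hold out 100 samples for validation; the selection strategy is not
# stated in the quote, so a seeded random split is assumed here.
rng = np.random.default_rng(seed=0)
idx = rng.permutation(len(params))
val_idx, train_idx = idx[:100], idx[100:]  # 100 validation, 2056 training
```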
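Algorithm 1 and the Experiment Setup row together describe a two-stage procedure: each network is trained separately with ADAM at a learning rate of 10⁻³, for roughly 1000 and 9000 steps respectively. The paper does not name its deep-learning framework, so this skeleton uses PyTorch as an assumption; the network handles f_w and f_d and the loss functions are placeholders, not the paper's actual objectives.

```python
import torch

def train_stage(model, loss_fn, data_loader, steps, lr=1e-3):
    # One training stage, matching the quoted setup: stochastic
    # gradient descent with an ADAM optimizer and learning rate 1e-3,
    # run for a fixed number of steps.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    batches = iter(data_loader)
    for _ in range(steps):
        try:
            batch = next(batches)
        except StopIteration:
            batches = iter(data_loader)  # restart the data pass
            batch = next(batches)
        opt.zero_grad()
        loss = loss_fn(model, batch)  # placeholder for the paper's losses
        loss.backward()
        opt.step()

# Two-stage schedule from the quote (names are placeholders):
#   train_stage(f_w, weighting_loss, loader, steps=1000)    # first network
#   train_stage(f_d, deformation_loss, loader, steps=9000)  # deformation network
```

Keeping the two stages as separate calls mirrors the quoted statement that training is performed separately for both networks.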