Accelerating Eulerian Fluid Simulation With Convolutional Networks

Authors: Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann, Ken Perlin

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present real-time 2D and 3D simulations that outperform recently proposed data-driven methods; the obtained results are realistic and show good generalization properties. ... The paper is organized as follows. We discuss related work in Section 2. In Section 3 we briefly introduce the fluid simulation techniques used in this paper. In Section 4 we present the proposed model. We provide implementation details in Section 5. Experimental results are described in Section 6 and conclusions are drawn in Section 7.
Researcher Affiliation | Collaboration | Google Brain, Mountain View, USA; New York University, New York, USA; Google DeepMind, London, UK.
Pseudocode | Yes | Algorithm 1 Euler Equation Velocity Update. 1: Advection and Force Update to calculate u*_t: 2: (optional) Advect scalar components through u_{t-1}; 3: Self-advect velocity field u_{t-1}; 4: Add external forces f_body; 5: Add vorticity confinement force f_vc; 6: Set normal component of solid-cell velocities. 7: Pressure Projection to calculate u_t: 8: Solve Poisson eqn, ∇²p_t = (1/∆t) ∇·u*_t, to find p_t; 9: Apply velocity update u_t = u*_t − ∆t ∇p_t. (See the pressure-projection sketch after this table.)
Open Source Code | Yes | Code, data and videos are made available at http://cims.nyu.edu/~schlacht/CNNFluids.htm.
Open Datasets | Yes | We provide a public dataset and processing pipeline to procedurally generate random ground-truth fluid frames for evaluation of simulation methods. ... The dataset is public, as well as the code for generating it. ... Code, data and videos are made available at http://cims.nyu.edu/~schlacht/CNNFluids.htm.
Dataset Splits | No | The paper describes training and test sets but does not explicitly provide details about a validation dataset split or how it was used.
Hardware Specification | Yes | We use an NVIDIA Titan X GPU with 12GB of ram and an Intel Xeon E5-2690 CPU.
Software Dependencies | No | The paper mentions software such as Torch7, cuSPARSE, cuBLAS, cuDNN, and MantaFlow but does not give specific version numbers for these dependencies, which are needed for a reproducible software specification.
Experiment Setup | Yes | When training our model we step forward either n = 4 steps, with probability 0.9, or n = 25 with probability 0.1. Furthermore we use a random time-step to promote time-step invariance according to ∆t = 1/30 · (0.203 + |N(0, 1)|), where N(0, 1) is a random sample from a Normal Gaussian distribution. ... we use Back Propagation (BPROP) to calculate all partial derivatives and the ADAM (Kingma & Ba, 2014) optimization algorithm to minimize the loss. (See the sampling sketch after this table.)
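The Poisson solve and velocity update quoted in the Pseudocode row (steps 8 and 9 of Algorithm 1) are the part of the solver the paper accelerates with a convolutional network. As a rough illustration of that baseline projection step only, the sketch below runs a few Jacobi iterations on a collocated 2D grid with periodic boundaries; the grid layout, boundary handling, iteration count, and the function name pressure_project are assumptions for illustration and are not the paper's staggered-grid, solid-cell implementation.

```python
import numpy as np

def pressure_project(u, v, dt, dx=1.0, iters=60):
    """Hedged sketch of Algorithm 1, steps 8-9, on a collocated 2D grid with
    periodic boundaries (an assumption; the paper uses a MAC grid and handles
    solid cells).  u, v are the (H, W) components of u*_t after advection and
    forces; returns the projected velocity (u_t, v_t) and the pressure p_t."""
    # Divergence of u* via central differences: div = du/dx + dv/dy
    div = ((np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) +
           (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0))) / (2.0 * dx)

    # Jacobi iterations for the Poisson equation  laplacian(p_t) = (1/dt) div(u*_t)
    rhs = div / dt
    p = np.zeros_like(u)
    for _ in range(iters):
        p = (np.roll(p, 1, axis=1) + np.roll(p, -1, axis=1) +
             np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0) -
             rhs * dx * dx) / 4.0

    # Velocity update: u_t = u*_t - dt * grad(p_t)
    u_new = u - dt * (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / (2.0 * dx)
    v_new = v - dt * (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / (2.0 * dx)
    return u_new, v_new, p
```

Taking the divergence of the update recovers the quoted Poisson equation, so the projected field is (approximately) divergence free once the iterations converge.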
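To make the quoted training schedule in the Experiment Setup row concrete, here is a minimal sketch of how the unroll length n and the random time-step ∆t could be drawn. The helper name sample_unroll_and_timestep and the use of a NumPy Generator are assumptions for illustration; this mirrors the quoted rule, not the paper's Torch7 training code.

```python
import numpy as np

def sample_unroll_and_timestep(rng):
    """Draw (n, dt) per the quoted schedule: n = 4 with probability 0.9,
    n = 25 with probability 0.1, and dt = (1/30) * (0.203 + |N(0, 1)|)."""
    n = 4 if rng.random() < 0.9 else 25            # occasional long unrolls
    dt = (1.0 / 30.0) * (0.203 + abs(rng.standard_normal()))
    return n, dt

# Example usage
rng = np.random.default_rng(0)
n_steps, dt = sample_unroll_and_timestep(rng)
```

Sampling ∆t around the nominal 1/30 s frame time, plus the occasional 25-step unroll, is what the quoted text describes as promoting time-step invariance and long-horizon stability during training.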