$\mathbf{\Phi}_\textrm{Flow}$: Differentiable Simulations for PyTorch, TensorFlow and Jax

Authors: Philipp Holl, Nils Thuerey

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To illustrate how the above features can be used, not only to simulate ground-truth data, but also to solve complex inverse problems, we perform a series of challenging experiments and reimplement experiments from prior work. We provide the full source code including data generation in the corresponding figures, and all shown plots are generated with our Matplotlib frontend. Jupyter notebooks containing the source code together with all plotting code are available in the supplemental information (SI), and performance measurements are given in Appendix A (Performance measurements): We benchmark all experiments with the three supported machine learning backends: PyTorch, TensorFlow and JAX. We always enable just-in-time (JIT) compilation using ΦFlow's @jit_compile function decorator. The results are shown in Tab. 1. Overall, the performance gap between the backends is reasonably small, and no library consistently outperforms the others. For fluids and tasks involving random data access, JAX usually yields the best performance, while PyTorch works best for easy-to-parallelize tasks. (A minimal @jit_compile sketch is given after the table.)
Researcher Affiliation | Academia | School of Computation, Information and Technology, Technical University of Munich, Germany.
Pseudocode | No | The paper contains executable source code examples (e.g., Figures 2, 3, 4, 5, 7), not pseudocode or formally labeled algorithm blocks.
Open Source Code | Yes | It is available at https://github.com/tum-pbs/PhiFlow.
Open Datasets | No | The paper's experiments primarily rely on synthetically generated data within the context of the simulations described (e.g., 'generating a ground truth conductivity C and initial temperature profile T0' in Figure 3; 'We generate a divergence-free 64×64 ground-truth velocity field' in Figure 4; 'The training data consists of corresponding velocity fields at t0 and t1. We generate 10 batches of 10 examples each' in Figure 5), rather than using or linking to pre-existing public datasets. (A sketch of such synthetic data generation is given after the table.)
Dataset Splits | No | The paper describes generating synthetic data for its experiments (e.g., 'The training data consists of corresponding velocity fields at t0 and t1. We generate 10 batches of 10 examples each'), but does not specify any explicit training, validation, or test dataset splits.
Hardware Specification | Yes | The table shows wall-clock time in ms per step on an NVIDIA RTX 3090, excluding warm-up.
Software Dependencies | No | The paper mentions software such as PyTorch, TensorFlow, JAX, NumPy, SciPy, Matplotlib, and Plotly as dependencies, but it does not specify version numbers for any of these components.
Experiment Setup | Yes | The paper includes specific experimental setup details within the provided executable source code examples, such as: 'dt=.25', 'x=256, y=256', 't=100' (Figure 2); 'dt=10', 'x=100, y=40', 'Solve('biCG-stab(2)')', 'Solve('GD')' (Figure 3); 'dt=.1', 'integrator=advect.rk4', 'x=64, y=64', 'Solve('L-BFGS-B')' (Figure 4); and 'learning_rate=1e-2', 'levels=4', 'epoch in range(10)' (Figure 5). (An optimization sketch using these Solve names is given after the table.)
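
The @jit_compile decorator quoted in the Research Type row can be exercised in a few lines of ΦFlow. The sketch below is a minimal, assumption-laden example rather than code from the paper: the backend import, the Noise() initialization, and the diffusivity value 0.1 are our choices, while dt=.25, the 256×256 grid, and the 100 steps are the Figure 2 values quoted above.

```python
# Minimal sketch (assumed PhiFlow 2.x API) of backend-agnostic JIT compilation.
# Swapping the import for phi.torch.flow or phi.tf.flow selects the other backends.
from phi.jax.flow import *

@math.jit_compile
def diffuse_step(temperature):
    # One explicit diffusion step with dt=.25 (the Figure 2 value);
    # diffusivity=0.1 is an illustrative choice. The function is traced
    # and compiled by the active backend on its first call.
    return diffuse.explicit(temperature, diffusivity=0.1, dt=0.25)

temperature = CenteredGrid(Noise(), extrapolation.PERIODIC, x=256, y=256)
for _ in range(100):  # t=100 steps, as quoted from Figure 2
    temperature = diffuse_step(temperature)
```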
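The synthetic data described in the Open Datasets row could, under the same API assumptions, be generated along the following lines. The Noise() sample and zero-velocity boundary are our guesses; the 64×64 divergence-free field matches the Figure 4 quote.

```python
# Hedged sketch: a random velocity sample projected to be divergence-free,
# mirroring the quoted ground-truth generation (assumed PhiFlow 2.x API).
from phi.flow import *

velocity = StaggeredGrid(Noise(), extrapolation.ZERO, x=64, y=64)
velocity, pressure = fluid.make_incompressible(velocity)  # pressure projection removes divergence
```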
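Finally, the Solve('GD') and Solve('L-BFGS-B') settings quoted in the Experiment Setup row name optimizers passed to ΦFlow's math.minimize. The sketch below is an assumed usage: the L2 loss and target field are hypothetical stand-ins for the paper's simulation-based objectives, and the tolerance values are illustrative.

```python
# Assumed sketch of gradient-based optimization via math.minimize.
# An autodiff-capable backend (here JAX) is required for the gradients.
from phi.jax.flow import *

target = CenteredGrid(Noise(), extrapolation.PERIODIC, x=64, y=64)

def loss(guess):
    # Hypothetical objective: L2 distance to the target field.
    return field.l2_loss(guess - target)

x0 = target * 0  # zero initial guess
estimate = math.minimize(loss, Solve('L-BFGS-B', rel_tol=1e-5, abs_tol=1e-5, x0=x0))
```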