DiffTaichi: Differentiable Programming for Physical Simulation

Authors: Yuanming Hu, Luke Anderson, Tzu-Mao Li, Qi Sun, Nathan Carr, Jonathan Ragan-Kelley, Fredo Durand

ICLR 2020

Reproducibility variables, each with the extracted result and the supporting LLM response:
Research Type: Experimental
  LLM Response: "We demonstrate the performance and productivity of our language in gradient-based learning and optimization tasks on 10 different physical simulators."
Researcher Affiliation: Collaboration
  LLM Response: MIT CSAIL ({yuanming,lukea,fredo}@mit.edu), Adobe Research ({qisu,ncarr}@adobe.com), UC Berkeley ({tzumao,jrk}@berkeley.edu)
Pseudocode: Yes
  LLM Response: code excerpt from the paper (a sketch of how gradients are taken through such a kernel follows this list):

    @ti.kernel
    def apply_spring_force(t: ti.i32):
        # Kernels can have parameters. Here t is a parameter with type int32.
        for i in range(n_springs):  # A parallel for, preferably on GPU
            a, b = spring_anchor_a[i], spring_anchor_b[i]
            x_a, x_b = x[t - 1, a], x[t - 1, b]
            dist = x_a - x_b
            length = dist.norm() + 1e-4
            F = (length - spring_length[i]) * spring_stiffness * dist / length
            # Apply spring impulses to mass points.
            force[t, a] += -F  # += is atomic by default
            force[t, b] += F
Open Source Code: Yes
  LLM Response: "Our language, compiler, and simulator code is open-source. All the results in this work can be reproduced by a single Python script."
Open Datasets: No
  LLM Response: The paper focuses on differentiable programming for physical simulators, which typically generate data internally or optimize against internal objectives rather than training on external, publicly available datasets; accordingly, no external dataset with access information is mentioned.
Dataset Splits: No
  LLM Response: Consistent with its focus on differentiable programming for physical simulators, the paper does not mention training, validation, or test splits of external data.
Hardware Specification: Yes
  LLM Response: "Table 1: diffmpm performance comparison on an NVIDIA GTX 1080 Ti GPU."
Software Dependencies: No
  LLM Response: The paper mentions various software frameworks such as TensorFlow, PyTorch, Autograd, JAX, Halide, Enoki, Theano, and Mitsuba 2, often with citations to their original papers. However, it does not provide specific version numbers for the software dependencies used in its own system (DiffTaichi, Taichi, Python, C++).
Experiment Setup: Yes
  LLM Response: "Table 2: smoke benchmark against Autograd, PyTorch, and JAX. We used a 110 × 110 grid and 100 time steps, each with 6 Jacobi pressure projections." (A sketch of one Jacobi pressure step follows below.)
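
To make the pseudocode entry concrete, here is a minimal, self-contained sketch of taking gradients through a Taichi kernel with the tape interface. It assumes the current ti.ad.Tape API (the paper's code predates it and used ti.Tape); the field names and the toy quadratic loss are illustrative, not from the paper:

    import taichi as ti

    ti.init(arch=ti.cpu)  # the paper targets GPUs; CPU keeps the sketch portable

    n = 8
    x = ti.field(dtype=ti.f32, shape=n, needs_grad=True)      # hypothetical input field
    loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)  # scalar loss

    @ti.kernel
    def compute_loss():
        for i in range(n):
            loss[None] += x[i] ** 2  # toy quadratic objective, not from the paper

    for i in range(n):
        x[i] = 0.5 * i

    loss[None] = 0
    # Record the forward pass, then evaluate gradients in reverse.
    with ti.ad.Tape(loss=loss):
        compute_loss()

    print(x.grad.to_numpy())  # d(loss)/d(x[i]) = 2 * x[i]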
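
For the experiment setup, the "6 Jacobi pressure projections" refer to the standard Jacobi relaxation used to solve the pressure Poisson equation in a smoke solver. The NumPy sketch below shows one such iteration on a 110 × 110 grid; the unit grid spacing, edge-replicated boundaries, and field names are assumptions for illustration, not the paper's exact solver:

    import numpy as np

    def jacobi_pressure_step(p, div):
        # One Jacobi iteration for laplacian(p) = div on a uniform grid
        # (unit spacing), with edge-replicated (Neumann-like) boundaries.
        padded = np.pad(p, 1, mode="edge")
        return 0.25 * (padded[:-2, 1:-1] + padded[2:, 1:-1]
                       + padded[1:-1, :-2] + padded[1:-1, 2:]
                       - div)

    # 110 x 110 grid with 6 projections per time step, matching the benchmark setup
    p = np.zeros((110, 110), dtype=np.float32)
    div = np.random.default_rng(0).standard_normal((110, 110), dtype=np.float32)
    for _ in range(6):
        p = jacobi_pressure_step(p, div)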