Learning to Control PDEs with Differentiable Physics
Authors: Philipp Holl, Nils Thuerey, Vladlen Koltun
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on a variety of control tasks in systems governed by advection-diffusion PDEs such as the Navier-Stokes equations. We quantitatively evaluate the resulting sequences on how well they approximate the target state and how much force was exerted on the physical system. Our method yields stable control for significantly longer time spans than alternative approaches. |
| Researcher Affiliation | Collaboration | Philipp Holl Technical University of Munich Vladlen Koltun Intel Labs Nils Thuerey Technical University of Munich |
| Pseudocode | Yes | Algorithm 1: Recursive algorithm computing the prediction refinement. The algorithm is called via Reconstruct[o₀, o∗, absent] to reconstruct a full trajectory from o₀ to o∗. (See the recursion sketch after the table.) |
| Open Source Code | Yes | A supporting contribution of our work is a differentiable PDE solver called ΦFlow that integrates with TensorFlow (Abadi et al., 2016) and PyTorch (Paszke et al., 2019). It is publicly available at https://github.com/tum-pbs/PhiFlow. (An illustrative gradient-through-solver sketch follows the table.) |
| Open Datasets | No | We generate training and test datasets for two distinct tasks: flow reconstruction and shape transition. Both datasets have a resolution of 128 × 128 with the velocity fields being sampled in staggered form (see Appendix A). |
| Dataset Splits | No | The paper mentions 'training' and 'test' datasets, and discusses 'batch sizes' and 'learning rates', but does not provide specific percentages or sample counts for training, validation, or test splits, nor does it refer to standard predefined splits for its custom-generated datasets. |
| Hardware Specification | Yes | All networks were implemented in TensorFlow (Abadi et al., 2016) and trained using the ADAM optimizer on an Nvidia GTX 1080 Ti. |
| Software Dependencies | No | The paper mentions software like TensorFlow and PyTorch, and their custom solver ΦFlow, but does not provide specific version numbers for any of these software components. |
| Experiment Setup | Yes | We use batch sizes ranging from 4 to 16. Supervised training of all networks converges within a few minutes, for which we iteratively decrease the learning rate from 10⁻³ to 10⁻⁵. We stop supervised training after a few epochs, comprising between 2000 and 10,000 iterations, as the networks usually converge within a fraction of the first epoch. For training with the differentiable solver, we start with a decreased learning rate of 10⁻⁴ since the backpropagation through long chains is more challenging than training with a supervised loss. (A sketch of this schedule appears after the table.) |
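The recursive refinement quoted in the Pseudocode row bisects the trajectory: a network predicts the observation at the midpoint of an interval, then the two halves are reconstructed recursively. Below is a minimal Python sketch of that bisection scheme, assuming a hypothetical midpoint-prediction network `op_net` standing in for the paper's observation-prediction model; it is not the paper's implementation.

```python
# Hedged sketch of the recursive prediction refinement (Algorithm 1).
# `op_net` is a hypothetical callable predicting the midpoint observation.

def reconstruct(o_start, o_end, op_net, depth):
    """Recursively reconstruct a trajectory between two observations.

    o_start, o_end : observations at the interval endpoints
    op_net         : network predicting the midpoint observation (assumed)
    depth          : remaining bisection levels; 0 means adjacent frames
    """
    if depth == 0:
        return [o_start, o_end]
    # Predict the observation halfway between the endpoints.
    o_mid = op_net(o_start, o_end)
    # Recurse on both halves; drop the duplicated midpoint.
    left = reconstruct(o_start, o_mid, op_net, depth - 1)
    right = reconstruct(o_mid, o_end, op_net, depth - 1)
    return left[:-1] + right
```

With `depth` bisection levels this yields 2^depth + 1 frames, matching the hierarchical predictor structure the paper describes.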
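The Open Source Code row's differentiable-solver claim can be illustrated without the ΦFlow API itself. The following is a minimal stand-in, not ΦFlow code: an explicit 1D diffusion step written in plain TensorFlow and unrolled through time, so that `tf.GradientTape` yields the gradient of a target-state loss with respect to an applied control force.

```python
import tensorflow as tf

# Illustrative stand-in for a differentiable PDE solver step (NOT PhiFlow).
def diffusion_step(u, nu=0.1):
    # Explicit finite differences: u_i += nu * (u_{i+1} - 2*u_i + u_{i-1})
    return u + nu * (tf.roll(u, 1, axis=0) - 2.0 * u + tf.roll(u, -1, axis=0))

u0 = tf.random.normal([128])          # initial state
target = tf.zeros([128])              # desired final state
force = tf.Variable(tf.zeros([128]))  # control variable to optimize

with tf.GradientTape() as tape:
    u = u0 + force                    # apply the control force
    for _ in range(10):               # unroll the solver through time
        u = diffusion_step(u)
    loss = tf.reduce_mean((u - target) ** 2)

grad = tape.gradient(loss, force)     # gradient through the whole rollout
```

Backpropagating through the unrolled solver in this way is what makes gradient-based control of the physical system possible.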
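For the Experiment Setup row, the paper reports ADAM, batch sizes of 4 to 16, and a learning rate iteratively decreased from 10⁻³ to 10⁻⁵, but not the exact decay points. A sketch under those assumptions:

```python
import tensorflow as tf

# Assumed iteration cut-offs; the paper does not state where the
# learning rate is decreased within the 2000-10,000 iteration range.
boundaries = [2000, 6000]
values = [1e-3, 1e-4, 1e-5]           # reported learning-rate range
schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```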