Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers
Authors: Kiwon Um, Robert Brand, Yun (Raymond) Fei, Philipp Holl, Nils Thuerey
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now provide a summary and discussion of our experiments with the different types of PDE interactions for a selection of physical models. Full details of boundary conditions, parameters, and discretizations of all five PDE scenarios are given in App. B. |
| Researcher Affiliation | Academia | Kiwon Um¹·², Robert Brand¹, Yun (Raymond) Fei³, Philipp Holl¹, Nils Thuerey¹ (¹Technical University of Munich, ²LTCI, Telecom Paris, IP Paris, ³Columbia University) |
| Pseudocode | No | No pseudocode or algorithm blocks are provided in the paper. |
| Open Source Code | Yes | The source code for this project is available at https://github.com/tum-pbs/Solver-in-the-Loop. |
| Open Datasets | No | The paper mentions generating "a large-scale data set" and "training sets" but does not provide any concrete access information (e.g., URL, DOI, or citation to a publicly available dataset) for these datasets. |
| Dataset Splits | No | The paper states 'For validation, we use data sets generated from the same parameter distribution as the training sets.' but does not specify exact split percentages or sample counts for training, validation, or test data. |
| Hardware Specification | No | The paper mentions 'a CPU-based reference simulation' but does not provide specific hardware details such as GPU or CPU models, processor types, or memory specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library versions or specific solver versions). |
| Experiment Setup | Yes | Our networks typically consist of 10 convolutional layers with 16 features each, interspersed with ReLU activation functions, using kernel sizes of 3^d and 5^d. The network parameters θ are optimized with a fixed number of steps with an Adam optimizer [30] and a learning rate of 10^-4. |
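
For concreteness, the architecture quoted in the Experiment Setup row could be sketched as follows. This is a minimal, hypothetical PyTorch sketch of a 2D correction network (d = 2, so 3^d and 5^d kernels become 3x3 and 5x5), not the authors' implementation; the channel counts, padding, and layer ordering are illustrative assumptions, and the official repository linked above should be consulted for exact details.

```python
import torch
import torch.nn as nn


class CorrectionNet(nn.Module):
    """Illustrative 2D correction network: ~10 convolutional layers
    with 16 features each, interspersed with ReLU activations.
    Input/output channel counts are assumptions (e.g., a 2D velocity field)."""

    def __init__(self, in_channels=2, out_channels=2, features=16, depth=10):
        super().__init__()
        # First layer uses the larger 5x5 kernel (5^d for d=2).
        layers = [nn.Conv2d(in_channels, features, kernel_size=5, padding=2),
                  nn.ReLU()]
        # Intermediate layers use 3x3 kernels (3^d for d=2).
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, kernel_size=3, padding=1),
                       nn.ReLU()]
        # Final layer maps back to the output field; no activation.
        layers += [nn.Conv2d(features, out_channels, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


model = CorrectionNet()
# Adam optimizer with learning rate 10^-4, as stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```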