Scale-invariant Learning by Physics Inversion
Authors: Philipp Holl, Vladlen Koltun, Nils Thuerey
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the capabilities of our method on a variety of canonical physical systems, showing that it yields significant improvements on a wide range of optimization and learning problems. ... We perform an extensive empirical evaluation on a wide variety of inverse problems including the highly challenging Navier-Stokes equations. |
| Researcher Affiliation | Collaboration | Philipp Holl (Technical University of Munich), Vladlen Koltun (Apple), Nils Thuerey (Technical University of Munich) |
| Pseudocode | No | The paper includes a flowchart (Fig. 3) to illustrate the training procedure but does not contain structured pseudocode or an algorithm block. |
| Open Source Code | Yes | All code required to reproduce our results is available at https://github.com/tum-pbs/SIP. |
| Open Datasets | No | The paper uses synthetic and pseudo-randomly generated data for its experiments (e.g., 'We construct the synthetic two-dimensional inverse process', 'on pseudo-randomly generated y', 'we generate examples x_GT'). It does not provide access information that would make these generated datasets publicly available. |
| Dataset Splits | Yes | For all three PDE problems, we train on 10000 training examples and validate on 1000 validation examples. The evaluation is done on the training set after each epoch. The final results are evaluated on a separate test set of 1000 examples (different from the validation set). |
| Hardware Specification | Yes | All experiments are run on an Intel i7-2600 (3.4 GHz) with 32 GB RAM and an NVIDIA RTX 2080 Ti. We use PyTorch [44] with CUDA 11.1 for all training procedures. |
| Software Dependencies | Yes | We use PyTorch [44] with CUDA 11.1 for all training procedures. |
| Experiment Setup | Yes | We use the Adam optimizer [33] with β1 = 0.9, β2 = 0.999, and a learning rate of 0.001. We use a batch size of 64 and train for 500 epochs on Poisson’s equation and 1000 epochs on the heat and Navier-Stokes equations. |
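The reported training configuration (Adam with β1 = 0.9, β2 = 0.999, learning rate 0.001) can be illustrated with a minimal, dependency-free sketch of a single Adam update for one scalar parameter. This is not the paper's training code (which uses PyTorch); it only shows how the stated hyperparameters enter the standard Adam update rule.

```python
import math

def adam_step(theta, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter, with the paper's hyperparameters
    as defaults. `state` holds the step count and the two moment estimates."""
    state["t"] += 1
    # Exponential moving averages of the gradient and squared gradient.
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moments.
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps)

state = {"t": 0, "m": 0.0, "v": 0.0}
theta = adam_step(0.5, 1.0, state)  # the first step moves theta by roughly lr
```

In PyTorch the same configuration would be expressed as `torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))`, with batching handled by a `DataLoader` with `batch_size=64`.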