Composing Partial Differential Equations with Physics-Aware Neural Networks

Authors: Matthias Karlbauer, Timothy Praditia, Sebastian Otte, Sergey Oladyshkin, Wolfgang Nowak, Martin V. Butz

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Results on both one- and two-dimensional PDEs (Burgers', diffusion-sorption, diffusion-reaction, Allen-Cahn) demonstrate FINN's superior modeling accuracy and excellent out-of-distribution generalization ability beyond initial and boundary conditions. With only one tenth of the number of parameters on average, FINN outperforms pure machine learning and other state-of-the-art physics-aware models in all cases, often even by multiple orders of magnitude."
Researcher Affiliation | Academia | "1 Neuro-Cognitive Modeling, University of Tübingen, Tübingen, Germany; 2 Department of Stochastic Simulation and Safety Research for Hydrosystems, University of Stuttgart, Stuttgart, Germany."
Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code and data that are used for this paper can be found in the repository https://github.com/CognitiveModeling/finn."
Open Datasets | No | The paper states that synthetic data is 'generated (by conventional numerical simulation)' and experimental data is 'collected from three different core samples'. No links or citations to publicly available datasets used for training are provided.
Dataset Splits | Yes | "For each problem, three different datasets are generated (by conventional numerical simulation): train, used to train the models; in-distribution test (in-dis-test), being the train data simulated with a longer time span to test the models' generalization ability (extrapolation); and out-of-distribution test (out-dis-test)."
Hardware Specification | Yes | "We compare the runtime for each model, run on a CPU with an i9-9900K core, a clock speed of 3.60 GHz, and 32 GB RAM. Additionally, we also perform the comparison of GPU runtime on a GTX 1060 (i.e., with 6 GB VRAM)."
Software Dependencies | No | The paper mentions PyTorch but does not specify a version number. No other software dependencies are listed with specific versions.
Experiment Setup | Yes | "All models are trained with ten different random seeds using PyTorch's default weight initialization. Mean and standard deviation of the prediction errors are summarized in Table 1 for train, in-dis-test, and out-dis-test. All models are trained until convergence using the L-BFGS optimizer, except for PhyDNet, which is trained with the Adam optimizer and a learning rate of 1 × 10⁻³ due to stability issues when training with the L-BFGS optimizer."
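
To make the reported setup concrete, below is a minimal, hypothetical PyTorch sketch of that training protocol: ten seeded runs, L-BFGS until convergence, and Adam at 1 × 10⁻³ as the PhyDNet fallback. This is not the authors' code; `build_model`, the data tensors, and the iteration budgets are placeholders assumed for illustration.

```python
import torch

# Hypothetical sketch, not the authors' implementation: model
# construction and data loading are placeholders.

def train_lbfgs(model, inputs, targets, max_iter=500):
    """Train with L-BFGS, which PyTorch drives through a closure."""
    optimizer = torch.optim.LBFGS(model.parameters(), max_iter=max_iter)
    loss_fn = torch.nn.MSELoss()

    def closure():
        # L-BFGS may re-evaluate the objective several times per step,
        # so loss and gradients are recomputed inside the closure.
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        return loss

    optimizer.step(closure)

def train_adam(model, inputs, targets, epochs=2000, lr=1e-3):
    """Fallback reported for PhyDNet, where L-BFGS was unstable."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

# Ten seeded runs; prediction errors would then be aggregated to the
# mean and standard deviation reported in Table 1 of the paper.
for seed in range(10):
    torch.manual_seed(seed)  # PyTorch's default init is applied per seed
    # model = build_model()                         # placeholder
    # train_lbfgs(model, train_inputs, train_targets)
```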