Parameterized Physics-informed Neural Networks for Parameterized PDEs

Authors: Woojin Cho, Minju Jo, Haksoo Lim, Kookjin Lee, Dongeun Lee, Sanghyun Hong, Noseong Park

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "With the extensive empirical evaluation, we demonstrate that P2INNs outperform the baselines both in accuracy and parameter efficiency on benchmark 1D and 2D parameterized PDEs and are also effective in overcoming the known failure modes." ... (Section 4, Evaluation:) "In this section, we test the performance of P2INNs on the benchmark PDE problems: 1D CDR equations and 2D Helmholtz equations, both of which are known to suffer from the failure modes. We first layout our experimental setup and show that P2INNs outperform the baselines with an extensive evaluation." (The benchmark equation families are sketched below the table.)
Researcher Affiliation | Collaboration | 1 Yonsei University, 2 Arizona State University, 3 LG CNS, 4 Texas A&M University-Commerce, 5 Oregon State University, 6 KAIST
Pseudocode | No | The paper describes its model architecture and training procedure in natural language and figures, but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | "To benefit the community, the code will be posted online. The source code for our proposed method and the dataset used in this paper are attached."
Open Datasets | Yes | "To benefit the community, the code will be posted online. The source code for our proposed method and the dataset used in this paper are attached."
Dataset Splits | No | The paper mentions "collocation points", "initial points", and "boundary points" and their counts for training and testing, but it does not specify a separate validation split or its size/proportion. (An illustrative point-sampling sketch follows the table.)
Hardware Specification | Yes | "We run our evaluation on a machine equipped with Intel Core-i9 CPUs and NVIDIA RTX A6000 and RTX 2080 TI GPUs."
Software Dependencies | Yes | "We implement P2INNs with PYTHON 3.7.11 and PYTORCH 1.10.2 that supports CUDA 11.4."
Experiment Setup | Yes | "For training, we employ Adam optimizers with learning rate of 1e-3. For our method, we set Dp, Dc, and Dg to 4, 3, and 5 respectively. In the loss function in Eq. (6), we set w1, w2, and w3 to 1. We use a hidden vector dimension of 50 for gθc and gθg, and 150 for gθp." (A configuration sketch follows the table.)
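
For reference, the two benchmark PDE families named in the Research Type row are conventionally written as below. These forms follow the PINN failure-mode literature; the exact parameterization used in the paper is not quoted here, so treat the coefficients (β, ν, ρ, k) and the forcing q as assumptions.

```latex
% 1D convection-diffusion-reaction (CDR), parameterized by convection beta,
% diffusion nu, and reaction rho (assumed standard form):
\frac{\partial u}{\partial t}
  + \beta \frac{\partial u}{\partial x}
  - \nu \frac{\partial^2 u}{\partial x^2}
  - \rho\, u(1 - u) = 0

% 2D Helmholtz equation with wavenumber k and forcing term q (assumed form):
\frac{\partial^2 u}{\partial x^2}
  + \frac{\partial^2 u}{\partial y^2}
  + k^2 u = q(x, y)
```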
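On the Dataset Splits row: PINN training data is not a conventional train/validation/test split but a set of sampled points. The sketch below illustrates how the three point sets mentioned there are typically drawn for a 1D time-dependent problem; the counts, the domain [0, 2π] × [0, t_max], and the periodic boundary pairing are placeholders, not the paper's values.

```python
import math
import torch

def sample_pinn_points(n_col=10_000, n_init=256, n_bdy=256, t_max=1.0):
    """Draw the three point sets a PINN is trained on (illustrative only)."""
    # Interior collocation points (x, t) where the PDE residual is enforced.
    col = torch.rand(n_col, 2) * torch.tensor([2 * math.pi, t_max])
    # Initial-condition points on the slice t = 0.
    init = torch.stack([torch.rand(n_init) * 2 * math.pi,
                        torch.zeros(n_init)], dim=1)
    # Boundary points at x = 0 and x = 2*pi (periodic pairing assumed).
    t_b = torch.rand(n_bdy) * t_max
    left = torch.stack([torch.zeros(n_bdy), t_b], dim=1)
    right = torch.stack([torch.full((n_bdy,), 2 * math.pi), t_b], dim=1)
    return col, init, torch.cat([left, right])
```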
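On the Experiment Setup row: the quoted hyperparameters are enough to wire up a minimal PyTorch sketch of the three subnetworks. Only the numbers (depths 4/3/5, widths 150/50/50, Adam with lr 1e-3, w1 = w2 = w3 = 1) come from the paper; the roles assigned to gθp (PDE-parameter encoder), gθc (coordinate encoder), and gθg (decoder), the input sizes, the latent width, the Tanh activation, and reading Dp/Dc/Dg as hidden-layer counts are all assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, hidden, depth, out_dim):
    """Fully connected stack with `depth` hidden layers of width `hidden`."""
    layers = [nn.Linear(in_dim, hidden), nn.Tanh()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), nn.Tanh()]
    layers.append(nn.Linear(hidden, out_dim))
    return nn.Sequential(*layers)

LATENT = 50  # assumed latent size; the paper quotes hidden widths only

g_p = mlp(3, 150, 4, LATENT)  # g_{theta_p}: PDE-parameter encoder (D_p = 4)
g_c = mlp(2, 50, 3, LATENT)   # g_{theta_c}: coordinate encoder (D_c = 3)
g_g = mlp(LATENT, 50, 5, 1)   # g_{theta_g}: decoder (D_g = 5); how the two
                              # latents are combined before g_g is assumed,
                              # e.g. elementwise sum: g_g(g_p(mu) + g_c(xt))

optimizer = torch.optim.Adam(
    [*g_p.parameters(), *g_c.parameters(), *g_g.parameters()], lr=1e-3)

# Loss weights from Eq. (6); the three terms (PDE residual, initial
# condition, boundary condition) are assumed roles.
w1 = w2 = w3 = 1.0
# total_loss = w1 * loss_residual + w2 * loss_initial + w3 * loss_boundary
```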