Competitive Physics Informed Networks

Authors: Qi Zeng, Yash Kothari, Spencer H. Bryngelson, Florian Tobias Schaefer

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments on a Poisson problem show that CPINNs achieve errors four orders of magnitude smaller than the best-performing PINN. We observe relative errors on the order of single-precision accuracy, consistently decreasing with each epoch. Additional experiments on the nonlinear Schrödinger, Burgers', and Allen-Cahn equations show that the benefits of CPINNs are not limited to linear problems.
Researcher Affiliation | Academia | Qi Zeng, Yash Kothari, Spencer H. Bryngelson & Florian Schäfer, School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA. {qzeng37@,ykothari3@,shb@,florian.schaefer@cc.}gatech.edu
Pseudocode | No | The paper presents mathematical formulations of the PINN and CPINN losses and equations, but no structured pseudocode or algorithm blocks. (A sketch of both objectives appears after this table.)
Open Source Code | Yes | The code used to produce the experiments described below can be found under github.com/comp-physics/CPINN.
Open Datasets | Yes | For the experiments in Sections 3.3, 3.4, and 3.5, we use the training and testing data sets from https://github.com/maziarraissi/PINNs/tree/master/main, which are available under the MIT license. (A hypothetical data-loading sketch follows the table.)
Dataset Splits | Yes | For the Poisson equation, we use 5,000 training points within the domain [-2, 2] × [-2, 2] and 50 training points on each side of the domain boundary. We randomly selected all training points with Latin Hypercube sampling. For the Allen-Cahn equation... we divide the 10,000 points into 10 subsets based on the time coordinate. (A sampling sketch follows the table.)
Hardware Specification | Yes | We train both models on an NVIDIA V100 GPU.
Software Dependencies | No | The paper mentions software such as "Adam (Kingma & Ba, 2014)" and "SGD (Ruder, 2016)" by name, with citations, but does not specify version numbers. It points to specific implementations, such as the "GMRES-based ACGD implementation of Zheng (2020)" and the "implementation of Extra Gradient methods available under the MIT license at https://github.com/GauthierGidel/Variational-Inequality-GAN", but these are references to repositories and papers, not explicit version numbers for the software dependencies themselves.
Experiment Setup | Yes | Adam and ACGD both use a learning rate of 10⁻³ and beta values β1 = 0.99 and β2 = 0.99. The ϵ values of Adam and ACGD are set to 10⁻⁸ and 10⁻⁶, respectively (all parameters follow the usual naming conventions). Each network's number of layers and neurons depends on the PDE problem. (An optimizer-configuration sketch follows the table.)
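
Since the paper gives the PINN and CPINN objectives only as equations, a minimal PyTorch sketch of their structure may help. This is our paraphrase of the formulations, not the authors' code: the right-hand side `f`, the network shapes, and the omission of the boundary-condition terms are all simplifications.

```python
import torch

def f(x):
    """Hypothetical right-hand side of the Poisson problem -Δu = f."""
    return torch.ones(x.shape[0])

def pde_residual(u_net, x):
    """Pointwise residual -Δu(x) - f(x), computed via automatic differentiation."""
    x = x.clone().requires_grad_(True)
    u = u_net(x)
    grad = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    lap = sum(
        torch.autograd.grad(grad[:, i].sum(), x, create_graph=True)[0][:, i]
        for i in range(x.shape[1])
    )
    return -lap - f(x)

def pinn_loss(u_net, x_int):
    """Standard PINN objective: mean squared PDE residual (boundary terms omitted)."""
    return pde_residual(u_net, x_int).pow(2).mean()

def cpinn_payoff(u_net, d_net, x_int):
    """Zero-sum CPINN payoff: the solution network u_net minimizes it while a
    discriminator d_net maximizes it by betting on the residual at each point."""
    return (d_net(x_int).squeeze(-1) * pde_residual(u_net, x_int)).mean()
```

In the paper this game is solved with a competitive gradient method (ACGD) rather than alternating gradient steps, which the authors credit for reaching single-precision relative errors.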
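The benchmark data referenced in the "Open Datasets" row ship as MATLAB files in the Raissi repository; below is a hypothetical loader. The file path and key names are assumptions based on that repository's layout, not something this paper specifies.

```python
from scipy.io import loadmat

# Path and keys assume the layout of github.com/maziarraissi/PINNs (main/Data/...).
data = loadmat("main/Data/burgers_shock.mat")
t = data["t"]      # time grid
x = data["x"]      # spatial grid
u = data["usol"]   # reference solution on the (x, t) grid
```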
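The Poisson sampling described in the "Dataset Splits" row can be reproduced in a few lines with scipy's quasi-Monte Carlo module; the seeds and the per-side 1-D sampling below are our assumptions.

```python
import numpy as np
from scipy.stats import qmc

# 5,000 interior collocation points on [-2, 2] x [-2, 2] via Latin Hypercube.
interior = qmc.scale(
    qmc.LatinHypercube(d=2, seed=0).random(5000),
    l_bounds=[-2.0, -2.0], u_bounds=[2.0, 2.0],
)

# 50 points on each of the four boundary sides, also Latin-Hypercube sampled.
def side(seed):
    return qmc.scale(qmc.LatinHypercube(d=1, seed=seed).random(50),
                     l_bounds=[-2.0], u_bounds=[2.0]).ravel()

boundary = np.concatenate([
    np.stack([side(1), np.full(50, -2.0)], axis=1),  # bottom edge
    np.stack([side(2), np.full(50,  2.0)], axis=1),  # top edge
    np.stack([np.full(50, -2.0), side(3)], axis=1),  # left edge
    np.stack([np.full(50,  2.0), side(4)], axis=1),  # right edge
])
```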
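For concreteness, the reported optimizer settings translate directly to PyTorch. The network below is an illustrative placeholder, since the paper varies layer and neuron counts per PDE problem.

```python
import torch

# Placeholder MLP; depth and width differ across the paper's PDE problems.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 50), torch.nn.Tanh(),
    torch.nn.Linear(50, 1),
)

# Reported Adam hyperparameters: lr = 1e-3, betas = (0.99, 0.99), eps = 1e-8.
adam = torch.optim.Adam(net.parameters(), lr=1e-3, betas=(0.99, 0.99), eps=1e-8)

# The ACGD runs reportedly use the same lr and betas with eps = 1e-6; see the
# GMRES-based ACGD implementation of Zheng (2020) cited in the paper.
```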