Efficient Error Certification for Physics-Informed Neural Networks

Authors: Francisco Eiras, Adel Bibi, Rudy R Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate its effectiveness in obtaining tight certificates by applying it to two classically studied PINNs (Burgers' and Schrödinger's equations), and two more challenging ones with real-world applications (the Allen-Cahn and Diffusion-Sorption equations). The aim of this experimental section is to (i) showcase that the Definition 1 certificates obtained with ∂-CROWN are tight compared to empirical errors computed with a large number of samples (Section 5.1; a sampling sketch appears after this table), (ii) highlight the relationship of our residual-based certificates and the commonly reported solution errors (Section 5.2), (iii) compare the efficiency of our method to an alternative bound propagation one (Section 5.3), and (iv) qualitatively analyze the importance of greedy input branching in the success of our method (Section 5.4).
Researcher Affiliation | Collaboration | 1: University of Oxford; 2: Google DeepMind.
Pseudocode | Yes | Algorithm 1 (Greedy Input Branching). Input: function h, input domain C, number of splits N_b, number of empirical samples N_s, number of branches per split N_d. Result: lower bound h_lb, upper bound h_ub. (A runnable sketch of this procedure appears after this table.)
Open Source Code | No | The paper states: "These PINNs were chosen for the experimental section as they are well established from previous literature in the field, and either code or trained models were available from that previous work." and "we obtain the trained parameters from Takamoto et al. (2022)." It does not, however, include any statement or link indicating that the authors' own implementation of the ∂-CROWN framework is publicly available.
Open Datasets | No | The paper studies Physics-Informed Neural Networks (PINNs) applied to Partial Differential Equations (PDEs), which operate on continuous spatio-temporal domains. While it mentions "randomly sample 10^6 domain points" for empirical analysis, it does not describe using or making publicly available any traditional dataset with concrete access information (link, DOI, repository, or a formal citation for a specific dataset file). Instead, it relies on the definitions of the PDEs and on pre-trained PINN models from the existing literature.
Dataset Splits | No | The paper focuses on post-training error certification of Physics-Informed Neural Networks (PINNs) and uses pre-trained models from other research. It does not describe or specify any training, validation, or test dataset splits for its own method or for the PINNs it certifies, as its scope is analyzing the continuous applicability domain rather than training new models from scratch or providing data splits for reproduction.
Hardware Specification | Yes | All timing results were obtained on a MacBook Pro with a 10-core M1 Max CPU.
Software Dependencies | No | The paper does not provide specific version numbers for any software, programming languages, or libraries (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x) used in its experiments.
Experiment Setup | No | The paper presents a post-training certification framework for Physics-Informed Neural Networks (PINNs) and uses pre-trained models from the existing literature. While it mentions some general parameters for its Algorithm 1 (e.g., N_b, N_s, N_d) and reports runtimes, it does not provide specific training hyperparameters (e.g., learning rate, batch size, optimizer details, epochs) for its own method or for the pre-trained PINNs it certifies.
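For context on the "empirical errors computed with a large number of samples" referenced above, the following is a minimal sketch of Monte Carlo residual sampling for a trained PINN. It assumes PyTorch, a model mapping (x, t) to u, and the standard Burgers' benchmark setup (domain [-1, 1] x [0, 1], viscosity nu = 0.01/pi); these specifics, and all function names, are illustrative assumptions rather than details taken from the paper.

import torch

def burgers_residual(model, x, t, nu=0.01 / torch.pi):
    # PDE residual f = u_t + u * u_x - nu * u_xx for Burgers' equation.
    # `model` is a trained PINN mapping (x, t) -> u; the viscosity value is
    # the standard benchmark choice, assumed here rather than quoted from the paper.
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def empirical_max_residual(model, n_samples=10**6, batch=10**4):
    # Monte Carlo estimate of max |f| over an assumed domain [-1, 1] x [0, 1],
    # mirroring the paper's "randomly sample 10^6 domain points" protocol.
    worst = 0.0
    for _ in range(n_samples // batch):
        x = torch.empty(batch).uniform_(-1.0, 1.0)
        t = torch.empty(batch).uniform_(0.0, 1.0)
        f = burgers_residual(model, x, t)
        worst = max(worst, f.abs().max().item())
    return worst  # sampling can only under-estimate the true maximum

Because sampling can only under-estimate the true worst-case residual, a certified bound is tight precisely when it sits close to this empirical value, which is the comparison Section 5.1 of the paper reports.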
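The greedy input branching of Algorithm 1 can likewise be sketched in a few lines. The sketch below assumes a hypothetical bound_fn that returns certified (lb, ub) over an axis-aligned box (e.g., via bound propagation); the split heuristic (refine the box whose certified bounds are loosest relative to empirical samples, splitting its widest dimension into N_d pieces) follows the spirit of the algorithm's stated inputs and outputs, but the concrete choices are assumptions, not the authors' exact procedure.

import heapq
import random

def greedy_input_branching(h, bound_fn, domain, Nb, Ns, Nd):
    # Sketch in the spirit of Algorithm 1 (Greedy Input Branching).
    # h        : point-wise evaluation of the target function (for sampling)
    # bound_fn : hypothetical certified bounder, box -> (lb, ub)
    # domain   : list of (low, high) intervals, one per input dimension
    # Nb / Ns / Nd : number of splits / samples per box / branches per split
    def score(box):
        lb, ub = bound_fn(box)
        pts = [[random.uniform(lo, hi) for lo, hi in box] for _ in range(Ns)]
        vals = [h(p) for p in pts]
        # Looseness: gap between certified bounds and empirical extremes.
        gap = max(min(vals) - lb, ub - max(vals))
        return lb, ub, gap

    lb, ub, gap = score(domain)
    heap = [(-gap, lb, ub, domain)]  # max-heap on looseness via negation
    for _ in range(Nb):
        _, _, _, box = heapq.heappop(heap)  # greedily pick the loosest box
        dim = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
        lo, hi = box[dim]
        step = (hi - lo) / Nd
        for k in range(Nd):  # split the widest dimension into Nd branches
            child = list(box)
            child[dim] = (lo + k * step, lo + (k + 1) * step)
            clb, cub, cgap = score(child)
            heapq.heappush(heap, (-cgap, clb, cub, child))
    # Sound global bounds: worst case over all remaining leaf boxes.
    h_lb = min(item[1] for item in heap)
    h_ub = max(item[2] for item in heap)
    return h_lb, h_ub

The design rationale for spending the split budget where certified bounds appear loosest, rather than splitting uniformly, is what Section 5.4 of the paper analyzes qualitatively.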