Causality-enhanced Discreted Physics-informed Neural Networks for Predicting Evolutionary Equations
Authors: Ye Li, Siqi Chen, Bin Shan, Sheng-Jun Huang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that the proposed method improves the accuracy of PINN approximation for evolutionary PDEs and improves efficiency by a factor of 4 to 40x. The code is available at https://github.com/SiqiChen9/TL-DPINNs. This section compares the accuracy and training efficiency of the TL-DPINN approach to existing PINN methods on several key evolutionary PDEs, including the Reaction-Diffusion (RD) equation, Allen-Cahn (AC) equation, Kuramoto-Sivashinsky (KS) equation, and Navier-Stokes (NS) equation. We conduct ablation studies on the relatively simpler RD and AC equations to ablate the main designs in our algorithm. |
| Researcher Affiliation | Academia | (1) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; (2) College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics; (3) MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China |
| Pseudocode | Yes | Algorithm 1: The training procedure of our TL-DPINN method |
| Open Source Code | Yes | The code is available at https://github.com/SiqiChen9/TL-DPINNs. |
| Open Datasets | No | The paper focuses on solving partial differential equations (PDEs), which inherently do not rely on pre-collected datasets in the way traditional machine learning models do. It explicitly states: 'In contrast, our methods are physics-informed and do not require additional training data.' Therefore, there is no conventional dataset provided or made publicly available. |
| Dataset Splits | No | The paper solves evolutionary PDEs using physics-informed neural networks, where the 'training' refers to minimizing a physics-based loss function. It does not mention traditional dataset splits (e.g., '70% training, 15% validation, 15% test') or specific sample counts for partitioning data into training, validation, or test sets. |
| Hardware Specification | Yes | We note that all neural networks are trained on an NVIDIA GeForce RTX 3080 Ti graphics card. |
| Software Dependencies | No | The paper mentions that the code is implemented in JAX and uses the Adam optimizer, but it does not provide specific version numbers for these software dependencies (e.g., 'JAX 0.x.y' or 'Adam version X.Y.Z'). |
| Experiment Setup | Yes | More details about the hyper-parameters of the neural networks and of Algorithm 1 are presented in Table 2. For the configuration of the other five baselines, all of them use a neural network of the same width and a depth one layer greater than that in Table 2. The number of collocation points for all five baselines is configured to be Nt × Nr as given in Table 2. The Adam optimizer with an initial learning rate of 0.001 and exponential rate decay is used; a hedged sketch of this optimizer configuration follows the table. |
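
The Software Dependencies and Experiment Setup rows state only that the code is implemented in JAX and trained with Adam at an initial learning rate of 0.001 with exponential rate decay. The sketch below shows one way such an optimizer configuration could be expressed; the use of the `optax` library, the `transition_steps` and `decay_rate` values, and the generic `loss_fn`/`batch` names are assumptions for illustration, not the authors' exact setup.

```python
# Hedged sketch of an Adam + exponential-decay setup in JAX, assuming optax.
# Only the initial learning rate (1e-3) and the use of exponential decay are
# taken from the paper; all other values are placeholders.
import jax
import optax

schedule = optax.exponential_decay(
    init_value=1e-3,        # initial learning rate reported in the paper
    transition_steps=1000,  # assumed decay interval
    decay_rate=0.9,         # assumed decay factor
)
optimizer = optax.adam(learning_rate=schedule)

def make_step(loss_fn):
    """Build a jitted training step for a generic physics-based loss."""
    @jax.jit
    def step(params, opt_state, batch):
        loss, grads = jax.value_and_grad(loss_fn)(params, batch)
        updates, opt_state = optimizer.update(grads, opt_state, params)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss
    return step
```

With a concrete `loss_fn` (e.g., a PDE-residual loss over collocation points) and initialized parameters, `opt_state = optimizer.init(params)` followed by repeated calls to the returned `step` would reproduce the reported optimizer behavior up to the assumed decay hyper-parameters.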