Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dual-Balancing for Physics-Informed Neural Networks
Authors: Chenhong Zhou, Jie Chen, Zaifeng Yang, Ching Eng Png
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that DB-PINN achieves significantly superior performance than those popular gradient-based weighting methods in terms of convergence speed and prediction accuracy. Our code and supplementary material are available at https://github.com/chenhong-zhou/DualBalanced-PINNs. ... Extensive experiments are performed on several PDE benchmarks to validate the effectiveness of the proposed method. Experimental results show that DB-PINN outperforms popular gradient statistics-based methods by a significant margin in both convergence rate and prediction accuracy. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR, China 2Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1 DB-PINNs to dynamically adjust loss weights. |
| Open Source Code | Yes | Our code and supplementary material are available at https://github.com/chenhong-zhou/DualBalanced-PINNs. |
| Open Datasets | No | The paper describes PDE benchmarks (Klein-Gordon Equation, Wave Equation, Helmholtz Equation) by providing their mathematical formulations and conditions. However, it does not provide concrete access information such as a specific link, DOI, repository name, or formal citation to a pre-existing dataset file or repository. |
| Dataset Splits | No | The paper uses Physics-Informed Neural Networks (PINNs) which sample collocation points from the domain defined by the PDEs, initial conditions, and boundary conditions. It does not describe traditional dataset splits (e.g., specific percentages or sample counts for training/validation/test sets) for pre-collected datasets. |
| Hardware Specification | No | The main text does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running the experiments. It states that "more detailed experimental setups are shown in the supplementary material," but this information is not in the main paper body. |
| Software Dependencies | No | The main text does not provide specific software details, such as library names with version numbers. It states that "more detailed experimental setups are shown in the supplementary material," but this information is not in the main paper body. |
| Experiment Setup | Yes | We use hyperbolic tangent activation functions and the Adam optimizer with a learning rate of 0.001 as default. ... In all cases below, the training process is iterated 10 times with random restarts. |
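The "Dataset Splits" row notes that PINNs sample collocation points from the PDE domain, boundary, and initial conditions rather than splitting a pre-collected dataset. A minimal sketch of that sampling for a 1-D time-dependent PDE is below; the function name, domain bounds, and point counts are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def sample_collocation_points(n_interior, n_boundary, n_initial,
                              x_range=(-1.0, 1.0), t_range=(0.0, 1.0),
                              rng=None):
    """Illustrative sampling of PINN training points on
    [x_min, x_max] x [t_min, t_max] (not the paper's exact scheme)."""
    rng = np.random.default_rng(rng)
    x_min, x_max = x_range
    t_min, t_max = t_range

    # Interior (PDE residual) points: random (x, t) inside the domain.
    interior = np.column_stack([
        rng.uniform(x_min, x_max, n_interior),
        rng.uniform(t_min, t_max, n_interior),
    ])

    # Boundary-condition points: x pinned to an endpoint, t random.
    xb = rng.choice([x_min, x_max], n_boundary)
    boundary = np.column_stack([xb, rng.uniform(t_min, t_max, n_boundary)])

    # Initial-condition points: t pinned to t_min, x random.
    initial = np.column_stack([
        rng.uniform(x_min, x_max, n_initial),
        np.full(n_initial, t_min),
    ])
    return interior, boundary, initial
```

Each group of points feeds a separate loss term (residual, boundary, initial), which is why such methods report sampling schemes instead of train/validation/test percentages.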
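The "Pseudocode" row indicates that Algorithm 1 dynamically adjusts loss weights. The paper's exact DB-PINN update is not reproduced here, but the gradient statistics-based weighting family it is compared against can be sketched generically: scale each condition loss so its mean gradient magnitude matches the peak gradient magnitude of the PDE residual loss, then smooth the weights with a moving average. All names and the smoothing factor are illustrative assumptions:

```python
import numpy as np

def gradient_stat_weights(residual_grad, condition_grads, eps=1e-8):
    """Generic gradient-statistics loss weighting (NOT the paper's
    DB-PINN update): weight each condition loss so its mean gradient
    magnitude matches the residual loss's max gradient magnitude."""
    ref = np.max(np.abs(residual_grad))
    return [ref / (np.mean(np.abs(g)) + eps) for g in condition_grads]

def ema_update(old_weights, new_weights, alpha=0.9):
    """Stabilize the weights across iterations with an
    exponential moving average."""
    return [alpha * w_old + (1 - alpha) * w_new
            for w_old, w_new in zip(old_weights, new_weights)]
```

In a training loop, the gradients would come from backpropagating each loss term separately every few iterations, with the smoothed weights applied to the total loss.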