RoPINN: Region Optimized Physics-Informed Neural Networks
Authors: Haixu Wu, Huakun Luo, Yuezhou Ma, Jianmin Wang, Mingsheng Long
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation. Code is available at this repository: https://github.com/thuml/RoPINN. From Section 4 (Experiments): To verify the effectiveness and generalizability of our proposed RoPINN, we experiment with a wide range of PDEs, covering diverse physics processes and a series of advanced PINN models. |
| Researcher Affiliation | Academia | Haixu Wu, Huakun Luo, Yuezhou Ma, Jianmin Wang, Mingsheng Long School of Software, BNRist, Tsinghua University, China {wuhx23,luohk19,mayz20}@mails.tsinghua.edu.cn, {jimwang,mingsheng}@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1 Region Optimized PINN (RoPINN)<br>Input: number of iterations T, number of past iterations T0 retained to estimate the trust region, default region size r, trust region calibration value σ_0 = 1, and initial PINN parameters θ_0.<br>Output: optimized PINN parameters θ_T.<br>Initialize an empty buffer to record gradients as g.<br>for t = 0 to T do<br>// Region Optimization with Monte Carlo Approximation<br>Sample points from neighborhood regions: S′ = {x_i + ξ_i}_{i=1}^{\|S\|}, x_i ∈ S, ξ_i ∼ U[0, r/σ_t]<br>Calculate loss function L_t = L(u_{θ_t}, S′)<br>Update θ_t to θ_{t+1} with optimizer (Adam [21], L-BFGS [27], etc.) to minimize loss function L_t<br>// Trust Region Calibration<br>Record the gradient of parameters g_t throughout optimization<br>Update gradient buffer g by adding g_t and keeping the latest T0 elements<br>Trust region calibration with σ_{t+1} = σ(g)<br>end for<br>(A runnable sketch of this algorithm follows the table.) |
| Open Source Code | Yes | Code is available at this repository: https://github.com/thuml/RoPINN. |
| Open Datasets | Yes | Benchmarks For a comprehensive evaluation, we experiment with four benchmarks: 1D-Reaction, 1D-Wave, Convection and PINNacle [12]. And in references: [12] Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, et al. PINNacle: A comprehensive benchmark of physics-informed neural networks for solving PDEs. arXiv preprint arXiv:2306.08827, 2023. |
| Dataset Splits | No | Table 4 provides N_train and N_test counts for PINNacle benchmarks, but does not explicitly mention a separate validation split. For 1D-Reaction, 1D-Wave, and Convection, sampling points for training and evaluation are mentioned, but not a distinct validation set. |
| Hardware Specification | Yes | All experiments are implemented in PyTorch [34] and trained on a single NVIDIA A100 GPU. |
| Software Dependencies | No | All experiments are implemented in PyTorch [34] and trained on a single NVIDIA A100 GPU. Reference [34] points to 'PyTorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.', which does not provide a specific version number (e.g., PyTorch 1.x). |
| Experiment Setup | Yes | In RoPINN (Algorithm 1), we select the multi-iteration hyperparameter T0 from {5, 10} and set the initial region size r = 10^-4 for all datasets, where the trust region size will be adaptively adjusted to fit the PDE property during training. For 1D-Reaction, 1D-Wave and Convection, we follow [58] and train the model with the L-BFGS optimizer [27] for 1,000 iterations. As for PINNacle, we strictly follow their official configuration [12] and train the model with Adam [21] for 20,000 iterations. Besides, for simplicity and fair comparison, we set the weights of the PINN loss as equal, that is, λ = 1 in Eq. (2). (A usage sketch of this setup follows the table.) |
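
For readers who want to try the algorithm quoted in the Pseudocode row, here is a minimal PyTorch sketch of one RoPINN iteration. It covers only the interior PDE residual term (boundary and initial losses are omitted), assumes a hypothetical user-supplied `pde_residual(model, x)` callable, and substitutes an illustrative calibration rule for the paper's σ(g), whose exact definition is given in the paper rather than in the quoted algorithm.

```python
import torch


def ropinn_step(model, pde_residual, collocation_pts, optimizer,
                grad_buffer, r=1e-4, sigma=1.0, T0=10):
    """One RoPINN iteration, following the Algorithm 1 steps quoted above.

    `pde_residual(model, x)` is a hypothetical user-supplied callable that
    returns pointwise PDE residuals. The calibration at the end is an
    ASSUMPTION standing in for the paper's sigma(g).
    """
    # Region optimization with Monte Carlo approximation: perturb each
    # collocation point within the calibrated trust region of size r / sigma.
    xi = torch.rand_like(collocation_pts) * (r / sigma)
    sampled = collocation_pts + xi

    optimizer.zero_grad()
    loss = (pde_residual(model, sampled) ** 2).mean()
    loss.backward()

    # Record the flattened parameter gradient g_t and keep only the latest
    # T0 entries in the buffer, as in Algorithm 1.
    g_t = torch.cat([p.grad.detach().flatten()
                     for p in model.parameters() if p.grad is not None])
    grad_buffer.append(g_t)
    del grad_buffer[:-T0]

    optimizer.step()

    # Trust region calibration sigma_{t+1} = sigma(g). ASSUMPTION: use the
    # gradient std across the buffer relative to the mean gradient norm.
    if len(grad_buffer) < 2:
        return loss.item(), sigma
    g = torch.stack(grad_buffer)
    sigma_next = (g.std(dim=0).norm() / (g.mean(dim=0).norm() + 1e-12)).item()
    return loss.item(), max(sigma_next, 1e-6)
```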
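
And a usage sketch tying `ropinn_step` to the hyperparameters quoted in the Experiment Setup row (r = 10^-4, T0 = 10; Adam for 20,000 iterations as in the PINNacle runs). The two-layer MLP, learning rate, collocation points, and the toy advection-like residual are illustrative placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn


def toy_residual(model, x):
    # Illustrative stand-in residual (u_t + u_x), NOT one of the paper's PDEs.
    x = x.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    return du[:, 0] + du[:, 1]


model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is a placeholder
collocation_pts = torch.rand(1000, 2)  # placeholder interior points

grad_buffer, sigma = [], 1.0  # sigma_0 = 1, per Algorithm 1
for t in range(20000):  # 20,000 Adam iterations, as quoted for PINNacle
    loss, sigma = ropinn_step(model, toy_residual, collocation_pts,
                              optimizer, grad_buffer,
                              r=1e-4, sigma=sigma, T0=10)
```

Note that for the 1D-Reaction, 1D-Wave and Convection runs the paper instead uses L-BFGS for 1,000 iterations; PyTorch's `torch.optim.LBFGS` requires a closure that re-evaluates the loss, so the step structure would differ slightly from the Adam loop above.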