Solving High Frequency and Multi-Scale PDEs with Gaussian Processes
Authors: Shikai Fang, Madison Cooley, Da Long, Shibo Li, Mike Kirby, Shandian Zhe
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated GP-HM with several benchmark PDEs that have high-frequency and multi-scale solutions. We compared with the standard PINN and several state-of-the-art variants. We compared with spectral methods (Boyd, 2001) that linearly combine a set of trigonometric bases to estimate the solution. We also compared with several other traditional numerical solvers. In all the cases, GP-HM consistently achieves relative L2 errors at 10^-3 or 10^-4 or even smaller. By contrast, the competing ML based approaches often failed and gave much larger errors. The visualization of the element-wise prediction error shows that GP-HM also recovers the local solution values much better. |
| Researcher Affiliation | Academia | Shikai Fang, Madison Cooley, Da Long, Shibo Li, Robert M. Kirby, Shandian Zhe; University of Utah, Salt Lake City, UT 84112, USA; {shikai,mcooley,dl932,shibo,kirby,zhe}@cs.utah.edu |
| Pseudocode | No | The paper describes the algorithm in text but does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | Yes | The code is released at https://github.com/xuangu-fang/Gaussian-Process-Slover-for-High-Freq-PDE. |
| Open Datasets | No | We considered three commonly-used benchmark PDE families in the literature of machine learning solvers (Raissi et al., 2019; Wang et al., 2021b; Krishnapriyan et al., 2021): Poisson, Allen-Cahn and Advection. Following the prior works, we fabricated a series of solutions to thoroughly examine the performance. The details are given in Section B of Appendix. |
| Dataset Splits | No | To solve the PDE, the PINN uses a deep neural network (NN) û_θ(x) to model the solution u. It samples N_c collocation points {x_c^j}_{j=1}^{N_c} from Ω and N_b points {x_b^j}_{j=1}^{N_b} from ∂Ω, and minimizes a loss, ... We will use the grid points on the boundary ∂Ω to fit the boundary conditions and all the grid points as the collocation points to fit the equation. (A hedged sketch of such a collocation-plus-boundary loss is given below the table.) |
| Hardware Specification | Yes | We examined the running time on a Linux workstation with NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | We implemented our method with JAX (Frostig et al., 2018) while all the competing ML based solvers with PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | For all the kernels, we initialized the length-scale to 1. For the Matérn kernel (component), we chose ν = 5/2. For our method, we set the number of components Q = 30, and initialized each w_q = 1/Q. For 1D Poisson and 1D Allen-Cahn equations, we varied the 1D mesh points from 400, 600 and 900. For 2D Poisson, 2D Allen-Cahn and 1D advection, we varied the mesh from 200×200, 400×400 and 600×600. We chose an ending frequency F from {20, 40, 100}, and initialized the μ_q's with linspace(0, F, Q). We used ADAM for optimization, and the learning rate was set to 10^-2. The maximum number of iterations was set to 1M, and we used the summation of the boundary loss and residual loss less than 10^-6 as the stopping condition. The solution estimate U was initialized as zero. We set λ_b = 500. (A hedged sketch of this configuration follows the table.) |
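
The Dataset Splits row quotes the PINN setup: sample collocation points in Ω and boundary points on ∂Ω, then minimize a residual-plus-boundary loss. Below is a minimal JAX sketch of such a loss for a 1D Poisson problem (u'' = f); the network `net`, source term `f`, boundary function `g`, and weight `lam_b` are illustrative assumptions rather than the authors' implementation.

```python
# Hedged sketch of a PINN-style objective with collocation and boundary points
# for a 1D Poisson equation u''(x) = f(x); all names here are placeholders.
import jax
import jax.numpy as jnp

def pinn_loss(params, net, x_c, x_b, f, g, lam_b=1.0):
    """PDE residual loss on collocation points x_c plus boundary loss on x_b."""
    u = lambda x: net(params, x)                            # scalar-in, scalar-out network u_theta(x)
    u_xx = jax.vmap(jax.grad(jax.grad(u)))(x_c)             # second derivative via nested autodiff
    residual = jnp.mean((u_xx - f(x_c)) ** 2)               # equation residual on collocation points
    boundary = jnp.mean((jax.vmap(u)(x_b) - g(x_b)) ** 2)   # boundary-condition mismatch
    return residual + lam_b * boundary
```

In the paper's grid-based setup, the grid points on ∂Ω would play the role of x_b and all grid points the role of x_c.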
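The Experiment Setup row pins down the reported hyperparameters: Q = 30 components with weights 1/Q and frequencies linspace(0, F, Q), unit length-scales, ADAM with learning rate 10^-2, at most 1M iterations, a 10^-6 stopping threshold on the combined boundary and residual loss, and λ_b = 500. The JAX/Optax sketch below arranges those reported values into a training loop under those assumptions; `loss_fn` is a placeholder objective, not the GP-HM loss from the released code.

```python
# Hedged sketch of the reported GP-HM hyperparameter setup and training loop.
# The objective below is a placeholder standing in for
# residual_loss(params) + lam_b * boundary_loss(params).
import jax
import jax.numpy as jnp
import optax

Q = 30                                    # number of kernel components
F = 40.0                                  # ending frequency, one of the reported choices {20, 40, 100}
params = {
    "w": jnp.full((Q,), 1.0 / Q),         # component weights, each initialized to 1/Q
    "freq": jnp.linspace(0.0, F, Q),      # component frequencies, initialized with linspace(0, F, Q)
    "ls": jnp.ones((Q,)),                 # length-scales initialized to 1 (Matérn component, nu = 5/2)
}
lam_b = 500.0                             # reported boundary loss weight (used in the real objective, not this placeholder)

def loss_fn(p):
    # Placeholder objective; the real GP-HM loss combines the PDE residual
    # and the lam_b-weighted boundary loss.
    return jnp.sum(p["w"] ** 2) + jnp.sum((p["ls"] - 1.0) ** 2)

optimizer = optax.adam(1e-2)              # ADAM with learning rate 10^-2
opt_state = optimizer.init(params)

@jax.jit
def step(p, state):
    loss, grads = jax.value_and_grad(loss_fn)(p)
    updates, state = optimizer.update(grads, state)
    return optax.apply_updates(p, updates), state, loss

for _ in range(1_000_000):                # at most 1M iterations
    params, opt_state, loss = step(params, opt_state)
    if loss < 1e-6:                       # stop when the combined loss falls below 10^-6
        break
```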