How does PDE order affect the convergence of PINNs?
Authors: Chang hoon Song, Yesom Park, Myungjoo Kang
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we present numerical experiments in support of our theoretical claims. |
| Researcher Affiliation | Academia | ¹ Research Institute of Mathematics, Seoul National University; ² Department of Mathematics, University of California, Los Angeles; ³ Department of Mathematical Sciences, Seoul National University |
| Pseudocode | No | The paper describes its methods mathematically and textually but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] |
| Open Datasets | No | Throughout all experiments, the training collocation points consist of a uniform grid and regularization parameters are set to ν_1, ..., ν_L = 1 and ν = 10. We implement all numerical experiments on a single NVIDIA RTX 3090 GPU. The quoted setup indicates that training data are collocation points generated on a uniform grid, rather than a publicly released dataset. |
| Dataset Splits | No | The paper describes the number of 'training collocation points' and 'boundary points' used but does not explicitly detail dataset splits for validation or testing in the conventional machine learning sense (e.g., held-out portions of data for evaluation). The evaluation is done by comparing against the analytical solutions of the PDEs. |
| Hardware Specification | Yes | We implement all numerical experiments on a single NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | Table 2 mentions 'optimizer(lr)' as 'GD(10⁻⁸)' or 'Adam(10⁻³)' but does not specify version numbers for these optimizers or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | Throughout all experiments, the training collocation points consist of a uniform grid and regularization parameters are set to ν_1, ..., ν_L = 1 and ν = 10. We trained networks with varying widths m, ranging from 10² to 10⁶, for each combination of p and k using GD optimization with a learning rate of 10⁻⁸. Experimental details are provided in Appendix D. Table 2 provides experimental settings including optimizer, learning rate, and number of collocation points. A minimal configuration sketch is given below the table. |
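To make the quoted setup concrete, the following is a minimal sketch of a PINN trained in the manner the Experiment Setup row describes: collocation points on a uniform grid, a boundary regularization weight ν = 10, and plain gradient descent with learning rate 10⁻⁸. It assumes PyTorch and a 1D Poisson toy problem; the width of 100, the grid size of 256, and helper names such as `f` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal PINN training sketch (assumed setup): u''(x) = f(x) on [0, 1] with
# u(0) = u(1) = 0, solved by penalizing the PDE residual on a uniform grid of
# collocation points plus a boundary term weighted by nu. Plain gradient
# descent with lr = 1e-8 mirrors the GD(10^-8) setting quoted in the table;
# the PDE, width, and grid size are placeholders.
import torch

torch.manual_seed(0)

width = 100          # network width m (the paper sweeps m from 1e2 to 1e6)
nu = 10.0            # boundary regularization weight (ν = 10 in the quote)
model = torch.nn.Sequential(
    torch.nn.Linear(1, width), torch.nn.Tanh(), torch.nn.Linear(width, 1)
)

# Uniform-grid collocation points: interior points and the two boundary points.
x_interior = torch.linspace(0.0, 1.0, 256)[1:-1].unsqueeze(1).requires_grad_(True)
x_boundary = torch.tensor([[0.0], [1.0]])

def f(x):
    # Source term for the toy problem u(x) = sin(pi x), so u'' = -pi^2 sin(pi x).
    return -(torch.pi ** 2) * torch.sin(torch.pi * x)

# Full-batch SGD with no momentum is plain gradient descent.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-8)

for step in range(1000):
    optimizer.zero_grad()
    u = model(x_interior)
    # Second derivative via two nested autograd calls.
    du = torch.autograd.grad(u.sum(), x_interior, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_interior, create_graph=True)[0]
    residual = d2u - f(x_interior)
    loss = residual.pow(2).mean() + nu * model(x_boundary).pow(2).mean()
    loss.backward()
    optimizer.step()
```

Since the paper evaluates against analytical solutions rather than a held-out dataset, accuracy in this sketch would likewise be measured by comparing `model(x)` to the known solution sin(πx) on a test grid.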