PINNACLE: PINN Adaptive ColLocation and Experimental points selection
Authors: Gregory Kang Ruey Lau, Apivich Hemachandra, See-Kiong Ng, Bryan Kian Hsiang Low
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We theoretically show that the criterion used by PINNACLE is related to the PINN generalization error, and empirically demonstrate that PINNACLE is able to outperform existing point selection methods for forward, inverse, and transfer learning problems. (An illustrative sketch of collocation-point selection appears after this table.) |
| Researcher Affiliation | Academia | Gregory Kang Ruey Lau, Apivich Hemachandra, Department of Computer Science, National University of Singapore, Singapore 117417; CNRS@CREATE, 1 Create Way, #08-01 Create Tower, Singapore 138602; {greglau,apivich}@comp.nus.edu.sg. See-Kiong Ng & Bryan Kian Hsiang Low, Department of Computer Science, National University of Singapore, Singapore 117417; seekiong@nus.edu.sg, lowkh@comp.nus.edu.sg |
| Pseudocode | Yes | Algorithm 1 PINNACLE |
| Open Source Code | Yes | The code has been provided at https://github.com/apivich-h/pinnacle, with the exception of the PDEBENCH benchmarks, which can be found at https://github.com/pdebench/PDEBench. |
| Open Datasets | Yes | For comparability with other works, we conducted experiments with open-sourced data (Takamoto et al., 2022; Lu et al., 2021) and experimental setups that match past work (Raissi et al., 2019). The specific PDEs studied and details are in Appendix J.1. |
| Dataset Splits | No | The paper mentions 'training point types' and 'training sets' but does not specify explicit training/validation/test dataset splits (e.g., percentages or sample counts) needed for reproduction. |
| Hardware Specification | Yes | In the experiment where we measure the algorithm's runtime in the forward problem (i.e., the experiments corresponding to Figure 21a), we trained each NN using one NVIDIA GeForce RTX 3080 GPU with an Intel Xeon Gold 6326 CPU @ 2.90GHz, while for the inverse problem (i.e., the experiments corresponding to Figures 21b and 21c), we trained each NN using one NVIDIA RTX A5000 GPU with an AMD EPYC 7543 32-Core Processor CPU. |
| Software Dependencies | No | All code was implemented in JAX (Bradbury et al., 2018) due to its efficiency in performing auto-differentiation. To make DEEPXDE and PDEBENCH compatible with JAX, we made modifications to the experimental code. The specific versions of JAX, DEEPXDE, and other software libraries are not provided. |
| Experiment Setup | Yes | In each experiment, we use a multi-layer perceptron with the number of hidden layers and width dependent on the specific setting. Each experiment uses tanh activation, with some also using LAAF (Jagtap et al., 2020b). The models are trained with the Adam optimizer, with a learning rate of 10⁻⁴ for the Advection and Burgers' equation problem settings, and 10⁻³ for the others. We list the NN architectures used in each problem setting below. (A hedged code sketch of this setup follows the table.) |
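To make the point-selection idea concrete, below is a minimal, hypothetical JAX sketch of adaptive collocation-point selection for a 1D advection PDE (u_t + c·u_x = 0). It scores candidate points by the magnitude of the PDE residual, a common baseline heuristic; it is not PINNACLE's actual criterion (which the paper relates to the PINN generalization error), and the MLP sizes, advection speed `C`, and helper names (`init_params`, `select_collocation`) are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

C = 1.0  # assumed advection speed (illustrative)

def init_params(key, sizes=(2, 32, 32, 1)):
    # Random tanh-MLP parameters; widths are arbitrary for this sketch.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (dout, din)) * jnp.sqrt(1.0 / din),
                       jnp.zeros(dout)))
    return params

def u(params, x, t):
    # Scalar network output u(x, t).
    h = jnp.array([x, t])
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def residual(params, x, t):
    # PDE residual r = u_t + C * u_x, computed via automatic differentiation.
    u_t = jax.grad(u, argnums=2)(params, x, t)
    u_x = jax.grad(u, argnums=1)(params, x, t)
    return u_t + C * u_x

def select_collocation(params, key, n_candidates=1024, k=128):
    # Keep the k candidates with the largest |residual| -- a simple
    # stand-in for a principled selection criterion.
    kx, kt = jax.random.split(key)
    xs = jax.random.uniform(kx, (n_candidates,), minval=-1.0, maxval=1.0)
    ts = jax.random.uniform(kt, (n_candidates,))
    r = jax.vmap(residual, in_axes=(None, 0, 0))(params, xs, ts)
    idx = jnp.argsort(-jnp.abs(r))[:k]
    return xs[idx], ts[idx]

params = init_params(jax.random.PRNGKey(0))
xs, ts = select_collocation(params, jax.random.PRNGKey(1))
```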
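And a minimal sketch of the reported training configuration (tanh MLP, Adam with learning rate 10⁻⁴ for the Advection/Burgers' settings and 10⁻³ otherwise), assuming `optax` for the optimizer and a plain data-fit MSE loss. The layer widths are placeholders, the full PINN loss would also include PDE-residual and boundary terms, and LAAF activations are omitted for brevity.

```python
import jax
import jax.numpy as jnp
import optax  # assumed optimizer library; the paper only states JAX was used

def init_mlp(key, sizes=(2, 64, 64, 64, 1)):
    # Hidden depth/width are setting-dependent in the paper; these are placeholders.
    params = []
    for din, dout in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (dout, din)) * jnp.sqrt(1.0 / din),
                       jnp.zeros(dout)))
    return params

def forward(params, xt):
    h = xt
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)  # tanh activation, as reported
    W, b = params[-1]
    return (W @ h + b)[0]

LR = 1e-4  # Advection / Burgers' settings; 1e-3 for the other settings
optimizer = optax.adam(LR)

def loss_fn(params, xt_batch, u_batch):
    # Plain MSE data loss; the actual PINN loss also has residual/boundary terms.
    pred = jax.vmap(forward, in_axes=(None, 0))(params, xt_batch)
    return jnp.mean((pred - u_batch) ** 2)

@jax.jit
def train_step(params, opt_state, xt_batch, u_batch):
    loss, grads = jax.value_and_grad(loss_fn)(params, xt_batch, u_batch)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state, loss

params = init_mlp(jax.random.PRNGKey(0))
opt_state = optimizer.init(params)
```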