Separable Physics-Informed Neural Networks

Authors: Junwoo Cho, Seungtae Nam, Hyunmo Yang, Seok-Bae Yun, Youngjoon Hong, Eunbyung Park

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show drastically reduced computational costs (62× in wall-clock time, 1,394× in FLOPs given the same number of collocation points) in multi-dimensional PDEs while achieving better accuracy.
Researcher Affiliation | Academia | (1) Department of Artificial Intelligence, Sungkyunkwan University; (2) Department of Mathematics, Sungkyunkwan University; (3) Department of Mathematical Sciences, KAIST; (4) Department of Electrical and Computer Engineering, Sungkyunkwan University
Pseudocode | No | No explicit pseudocode or algorithm block was found.
Open Source Code | Yes | For visualized results and code, please see https://jwcho5576.github.io/spinn.github.io/.
Open Datasets | Yes | We obtained the reference solution (101×101×101 resolution) through a widely-used PDE solver platform, FEniCS [12].
Dataset Splits | No | The paper describes using training points (collocation points) and evaluating against reference solutions, but does not specify a separate validation split or its purpose for hyperparameter tuning.
Hardware Specification | Yes | For 3-d systems, the number of collocation points of 64³ was the upper limit for the vanilla PINN (54³ for the modified MLP) when we trained with a single NVIDIA RTX 3090 GPU with 24GB of memory.
Software Dependencies | No | The paper mentions software like FEniCS [12], JAX-CFD [9], Adam [8], and DeepXDE [13] but does not provide specific version numbers for any of them.
Experiment Setup | Yes | For our model, we used three body networks of 4 hidden layers with 64/32 hidden feature/output sizes each. For the baseline model, we used a single MLP of 5 hidden layers with a 128 hidden feature size. We used the Adam [8] optimizer with a learning rate of 0.001 and trained for 50,000 iterations for every experiment. All weight factors λ in the loss function in Eq. 47 are set to 1. The final reported errors are extracted where the total loss was minimum across all training iterations. We also resampled the input points every 100 epochs.
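
The Experiment Setup row translates fairly directly into code. Below is a minimal JAX sketch (the paper's released code is JAX-based) of the configuration it describes: three per-axis body networks of 4 hidden layers with 64 hidden features and 32 output features, combined by a product-and-sum over the shared feature dimension, trained with Adam at a learning rate of 0.001 for 50,000 iterations, with input points resampled every 100 steps. The separable combination is also what underlies the FLOP reduction quoted under Research Type: each body network sees N one-dimensional inputs rather than N³ collocation points. The Gaussian initialization, tanh activation, and stand-in loss here are assumptions, not the authors' implementation; the actual objective is the paper's weighted PDE residual loss, which depends on the equation being solved.

```python
# Minimal sketch of the setup described above, assuming tanh bodies, Gaussian
# initialization, and a stand-in loss; not the authors' implementation.
import jax
import jax.numpy as jnp
import optax

FEATURES, RANK, DEPTH = 64, 32, 4   # hidden size / output size / hidden layers (per the table)

def init_body(key):
    """One body network: scalar input -> DEPTH hidden layers -> RANK features."""
    sizes = [1] + [FEATURES] * DEPTH + [RANK]
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def body_apply(params, coords):
    h = coords[:, None]                          # (N,) axis coordinates -> (N, 1)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b                             # (N, RANK) per-axis features

def spinn_apply(params, x, y, z):
    # Separable prediction on the full N^3 grid from 3N one-dimensional passes:
    # u(x_i, y_j, z_k) = sum_r fx[i, r] * fy[j, r] * fz[k, r]
    fx, fy, fz = (body_apply(p, c) for p, c in zip(params, (x, y, z)))
    return jnp.einsum('ir,jr,kr->ijk', fx, fy, fz)

key = jax.random.PRNGKey(0)
key, *bkeys = jax.random.split(key, 4)
params = [init_body(k) for k in bkeys]           # three body networks
opt = optax.adam(1e-3)                           # Adam, learning rate 0.001
opt_state = opt.init(params)

@jax.jit
def step(params, opt_state, x, y, z):
    def loss_fn(p):
        u = spinn_apply(p, x, y, z)
        return jnp.mean(u ** 2)                  # stand-in for the PDE residual loss
    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = opt.update(grads, opt_state)
    return optax.apply_updates(params, updates), opt_state, loss

for it in range(50_000):                         # 50,000 iterations, per the table
    if it % 100 == 0:                            # resample input points every 100 steps
        key, sub = jax.random.split(key)
        x, y, z = jax.random.uniform(sub, (3, 64))   # a 64^3 grid from 3 x 64 axis points
    params, opt_state, loss = step(params, opt_state, x, y, z)
```

In the actual experiments, the stand-in objective would be replaced by the weighted residual loss of Eq. 47 with all λ set to 1, and the reported error would be taken at the iteration where the total loss is minimal.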