BENO: Boundary-embedded Neural Operators for Elliptic PDEs
Authors: Haixin Wang, Jiaxin Li, Anubhav Dwivedi, Kentaro Hara, Tailin Wu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 EXPERIMENTS We aim to answer the following questions: (1) Compared with existing baselines, can BENO learn the solution operator for elliptic PDEs with complex geometry and inhomogeneous boundary values? (2) Can BENO generalize to out-of-distribution boundary geometries and boundary values, and different grid resolutions? (3) Are all components of BENO essential for its performance? |
| Researcher Affiliation | Academia | Haixin Wang1, Jiaxin Li2, Anubhav Dwivedi3, Kentaro Hara3, Tailin Wu2; 1National Engineering Research Center for Software Engineering, Peking University; 2Department of Engineering, Westlake University; 3Department of Astronautics and Aeronautics, Stanford University |
| Pseudocode | Yes | Algorithm 1 Learning Algorithm of the proposed BENO |
| Open Source Code | Yes | Our source code can be found at https://github.com/AI4Science-WestlakeU/beno.git. |
| Open Datasets | No | For elliptic PDEs simulations, we construct five different datasets with inhomogeneous boundary values, including 4/3/2/1-corner squares and squares without corners. Each dataset consists of 1000 samples with randomly initialized boundary shapes and values, with 900 samples used for training and validation, and 100 samples for testing. Details on data generation are provided in Appendix C. (The paper describes how the dataset was constructed but does not provide a direct link or specific access information to the generated dataset files.) |
| Dataset Splits | Yes | Each dataset consists of 1000 samples with randomly initialized boundary shapes and values, with 900 samples used for training and validation, and 100 samples for testing. |
| Hardware Specification | Yes | All experiments are based on PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019) on 2 NVIDIA A100 GPUs (80G). |
| Software Dependencies | No | All experiments are based on PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019) on 2 NVIDIA A100 GPUs (80G). (Specific version numbers for these software libraries are not explicitly stated, only references to their initial publications.) |
| Experiment Setup | Yes | We use Adam (Kingma & Ba, 2014) optimizer with a weight decay of 5e-4 and a learning rate of 5e-5 obtained from grid search for all experiments. Each experiment is trained for 1000 epochs... Table 11: Hyper-parameters Configuration (includes Epochs 1000, Learning Rate 5e-05, MLP Layers in Eq. 7 3, Message Passing Steps T 5, Transformer Layers 1, Number of Attention Head 2, etc.) |
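The experiment-setup row above can be summarized as a small configuration sketch. This is not the authors' code; the dictionary key names and the `make_optimizer` helper are my own, and only the values (Adam, weight decay 5e-4, learning rate 5e-5, 1000 epochs, and the Table 11 hyperparameters) come from the paper.

```python
# Hyperparameters reported in the paper (Adam settings and Table 11).
# Key names are illustrative, not from the authors' repository.
BENO_CONFIG = {
    "optimizer": "Adam",
    "weight_decay": 5e-4,       # from grid search, per the paper
    "learning_rate": 5e-5,      # from grid search, per the paper
    "epochs": 1000,
    "mlp_layers_eq7": 3,        # MLP layers in Eq. 7
    "message_passing_steps": 5, # T in the paper
    "transformer_layers": 1,
    "attention_heads": 2,
}

def make_optimizer(model_params, cfg=BENO_CONFIG):
    """Build the described Adam optimizer; requires PyTorch at call time."""
    import torch  # deferred so the config above stays dependency-free
    return torch.optim.Adam(
        model_params,
        lr=cfg["learning_rate"],
        weight_decay=cfg["weight_decay"],
    )
```

Keeping the values in one plain dictionary makes it easy to check a reimplementation against the reported setup before training.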