Accelerating PDE Data Generation via Differential Operator Action in Solution Space
Authors: Huanshuo Dong, Hong Wang, Haoyang Liu, Jian Luo, Jie Wang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this chapter, we compare our proposed data generation method with existing data generation methods. Our analysis examines three main performance indicators, which are crucial for evaluating the effectiveness of data generation methods: (1) accuracy of the data, (2) time cost of generating data, and (3) errors obtained from training neural operator models. |
| Researcher Affiliation | Academia | CAS Key Laboratory of Technology in GIPAS & MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China. Correspondence to: Jie Wang <jiewangx@ustc.edu.cn>. |
| Pseudocode | Yes | A. Specific pseudocode of GMRES |
| Open Source Code | Yes | Our code is available at https://github.com/hs-dong/DiffOAS. |
| Open Datasets | Yes | We tested three different types of PDE problems that have important applications in science and engineering (detailed descriptions are listed in Appendix B): Darcy Flow Problem (Li et al., 2020), Scalar Wave Equation in Electromagnetism (Zhang et al., 2022), and Solute Diffusion in Porous Media (Mauri, 1991). |
| Dataset Splits | No | For constructing the FNO and DeepONet models, we employed 100 instances of test data generated using GMRES methods. The detailed settings are presented in Appendix B.1. The paper does not explicitly specify training/validation/test dataset splits with percentages or counts, or a cross-validation setup. |
| Hardware Specification | Yes | The data generation process was performed on an Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz, while the model training took place on a GeForce RTX 3090 GPU with 24GB of memory. |
| Software Dependencies | Yes | We use an existing data generation method based on the GMRES algorithm as the solution and baseline for our study, utilizing scipy 1.11.4 (Virtanen et al., 2020). |
| Experiment Setup | Yes | FNO: We employ 4 FNO layers with learning rate 0.001, batch size 20, epochs 500, modes 12, and width 32. DeepONet: We utilize branch layers: [50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50] and trunk layers: [50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50], with the activation function set to tanh. The learning rate is 0.001, batch size is 20, and the training process is performed for 500 epochs. (Illustrative sketches of the GMRES baseline and these model configurations follow the table.) |
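
The baseline described above generates data by solving each discretized PDE system with GMRES through scipy 1.11.4. The following is a minimal, hypothetical sketch of that pipeline for a single training pair: `laplacian_2d` and `generate_sample` are illustrative helper names, not the authors' code, and the 5-point Laplacian stands in for the paper's operators (Darcy flow, scalar wave, solute diffusion).

```python
# Hypothetical sketch of a GMRES-based baseline data generation step.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

def laplacian_2d(n):
    """Sparse 5-point Laplacian on an n x n grid (illustrative placeholder operator)."""
    e = np.ones(n)
    main = sp.diags([e[:-1], -4.0 * e, e[:-1]], [-1, 0, 1])
    off = sp.diags([e[:-1], e[:-1]], [-1, 1])
    return (sp.kron(sp.identity(n), main) + sp.kron(off, sp.identity(n))).tocsr()

def generate_sample(n=32, seed=0):
    """One (input function, solution) pair produced by a GMRES solve."""
    rng = np.random.default_rng(seed)
    A = laplacian_2d(n)                # placeholder discretized PDE operator
    f = rng.standard_normal(n * n)     # random forcing / input function on the grid
    u, info = gmres(A, f, tol=1e-8)    # `tol` is the scipy 1.11.x keyword (newer releases use `rtol`)
    assert info == 0, "GMRES did not converge"
    return f.reshape(n, n), u.reshape(n, n)

pairs = [generate_sample(seed=s) for s in range(4)]   # a tiny illustrative dataset
```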
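For the FNO configuration in the setup row (4 Fourier layers, modes 12, width 32, learning rate 0.001), the sketch below builds a standard 2D FNO in PyTorch with those hyperparameters. It follows the commonly used spectral-convolution formulation rather than the authors' exact implementation, and the input/output channel counts are assumptions.

```python
# Hedged sketch of a 2D FNO matching the listed hyperparameters (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralConv2d(nn.Module):
    """Fourier layer: FFT, keep the lowest `modes` frequencies, inverse FFT."""
    def __init__(self, in_ch, out_ch, modes):
        super().__init__()
        scale = 1.0 / (in_ch * out_ch)
        self.modes = modes
        self.w1 = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes, modes, dtype=torch.cfloat))
        self.w2 = nn.Parameter(scale * torch.randn(in_ch, out_ch, modes, modes, dtype=torch.cfloat))

    def forward(self, x):
        B, _, H, W = x.shape
        m = self.modes
        x_ft = torch.fft.rfft2(x)
        out_ft = torch.zeros(B, self.w1.shape[1], H, W // 2 + 1,
                             dtype=torch.cfloat, device=x.device)
        out_ft[:, :, :m, :m] = torch.einsum("bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.w1)
        out_ft[:, :, -m:, :m] = torch.einsum("bixy,ioxy->boxy", x_ft[:, :, -m:, :m], self.w2)
        return torch.fft.irfft2(out_ft, s=(H, W))

class FNO2d(nn.Module):
    """Four Fourier layers with modes=12 and width=32, per the setup row."""
    def __init__(self, modes=12, width=32, in_ch=1, out_ch=1):
        super().__init__()
        self.lift = nn.Conv2d(in_ch, width, 1)
        self.spectral = nn.ModuleList([SpectralConv2d(width, width, modes) for _ in range(4)])
        self.pointwise = nn.ModuleList([nn.Conv2d(width, width, 1) for _ in range(4)])
        self.proj = nn.Conv2d(width, out_ch, 1)

    def forward(self, x):                          # x: (batch, in_ch, H, W)
        x = self.lift(x)
        for s, w in zip(self.spectral, self.pointwise):
            x = F.gelu(s(x) + w(x))
        return self.proj(x)

model = FNO2d()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # lr 0.001; batch size 20, 500 epochs per the table
```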
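Similarly, the DeepONet row specifies branch and trunk networks of twelve 50-unit layers with tanh activations, trained with learning rate 0.001 for 500 epochs at batch size 20. The sketch below wires up a generic DeepONet with those sizes; the number of sensor points (`n_sensors`) and the coordinate dimension (`coord_dim`) are illustrative assumptions not stated in the table.

```python
# Hedged sketch of the listed DeepONet configuration (sizes from the table, I/O dims assumed).
import torch
import torch.nn as nn

def mlp(in_dim, hidden=(50,) * 12):
    """Fully connected stack with tanh activations between layers."""
    layers, d = [], in_dim
    for h in hidden:
        layers += [nn.Linear(d, h), nn.Tanh()]
        d = h
    return nn.Sequential(*layers[:-1])        # no activation after the final layer

class DeepONet(nn.Module):
    def __init__(self, n_sensors=100, coord_dim=2):
        super().__init__()
        self.branch = mlp(n_sensors)          # encodes the sampled input function
        self.trunk = mlp(coord_dim)           # encodes the query coordinate
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u, y):
        # u: (batch, n_sensors) input function samples; y: (batch, coord_dim) query points.
        # Dot product of branch and trunk features gives the operator output at y.
        return (self.branch(u) * self.trunk(y)).sum(dim=-1) + self.bias

model = DeepONet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # lr 0.001; batch size 20, 500 epochs per the table
```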