PAPM: A Physics-aware Proxy Model for Process Systems

Authors: Pengwei Liu, Zhongkai Hao, Xingyu Ren, Hangjie Yuan, Jiayang Ren, Dong Ni

ICML 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Through systematic comparisons with state-of-the-art pure data-driven and physics-aware models across five two-dimensional benchmarks in nine generalization tasks, PAPM notably achieves an average performance improvement of 6.7%, while requiring fewer FLOPs, and just 1% of the parameters compared to the prior leading method.
Researcher Affiliation Academia Zhejiang University, Hangzhou, Zhejiang, China; Tsinghua University, Beijing, China; University of British Columbia, Vancouver, BC, Canada.
Pseudocode Yes Appendix D.1 provides the pseudo-code for the entire training process, offering a comprehensive understanding of our approach. The structure-preserved localized operator is detailed in Alg. 1, the second operator is shown in Alg. 2, and the third one, the hybrid operator, is a combination of these two operators.
Open Source Code Yes Code is available at https://github.com/pengwei07/PAPM.
Open Datasets Yes RD2d (Takamoto et al., 2022) ... This dataset can be downloaded at https://github.com/pdebench/PDEBench
Dataset Splits Yes For C Int., the data is uniformly shuffled and then split into training, validation, and testing datasets in a [7 : 1 : 2] ratio.
Hardware Specification Yes all experiments are run on 1-3 NVIDIA Tesla P100 GPUs.
Software Dependencies No The paper mentions training models with the "AdamW (Loshchilov & Hutter, 2017) optimizer" but does not specify software versions for programming languages, libraries, or frameworks (e.g., Python version, PyTorch/TensorFlow version).
Experiment Setup Yes We train all models with the AdamW (Loshchilov & Hutter, 2017) optimizer with the exponential decaying strategy, and epochs are set as 500. The causality parameters are α1 = 0.1 and α0 = 0.001. The initial learning rate is 1e-3, and the ReduceLROnPlateau schedule is utilized with a patience of 20 epochs and a decay factor of 0.8. For a fair comparison, the batch size is identical across all methods for the same task.
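For the dataset-split row above, a minimal sketch of a uniformly shuffled [7 : 1 : 2] split is shown below; the `split_dataset` helper and its seed handling are illustrative assumptions, not code from the paper's repository.

```python
import numpy as np

def split_dataset(samples, seed=0):
    """Uniformly shuffle the samples, then split into train/val/test at a 7:1:2 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(0.7 * len(samples))
    n_val = int(0.1 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```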
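The experiment-setup row describes the optimizer and learning-rate schedule; the sketch below shows how that configuration could look in PyTorch. The framework choice, the placeholder model, and the synthetic data are assumptions (the paper does not pin down software dependencies); only the hyperparameters (AdamW, initial learning rate 1e-3, ReduceLROnPlateau with patience 20 and factor 0.8, 500 epochs) come from the reported setup.

```python
import torch
import torch.nn as nn

# Placeholder model and data; the actual PAPM architecture and benchmark datasets are not reproduced here.
model = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))
loss_fn = nn.MSELoss()
train_loader = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(10)]
val_loader = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(2)]

# Reported hyperparameters: AdamW, lr 1e-3, ReduceLROnPlateau (patience 20, factor 0.8), 500 epochs.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=20
)

for epoch in range(500):
    model.train()
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()

    # Step the plateau scheduler on the average validation loss once per epoch.
    model.eval()
    with torch.no_grad():
        val_loss = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)
    scheduler.step(val_loss)
```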