PURE: Prompt Evolution with Graph ODE for Out-of-distribution Fluid Dynamics Modeling
Authors: Hao Wu, Changhu Wang, Fan Xu, Jinbao Xue, Chong Chen, Xian-Sheng Hua, Xiao Luo
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various benchmark datasets validate the superiority of the proposed PURE in comparison to various baselines. |
| Researcher Affiliation | Collaboration | 1University of Science and Technology of China, 2University of California, Los Angeles, 3Tencent, 4Terminus Group |
| Pseudocode | Yes | The whole learning algorithm of PURE is summarized in Algorithm 1. |
| Open Source Code | Yes | Our codes are available at https://github.com/easylearningscores/PURE_main. |
| Open Datasets | Yes | We use Prometheus [80] and follow the original setup for environment segmentation. Real-world Data. We employ the ERA5 [23]... The 2D Navier-Stokes equations [45]... The spherical shallow water equations [14]... The 3D reaction-diffusion equations describe the diffusion and reaction of chemicals in space [62]... |
| Dataset Splits | Yes | Out-of-Distribution Generalization: We train the model in the In-Domain environment and test it in the Adaptation environment to verify its generalization ability. Training and testing in the In-Domain environment is called the w/o OOD experiment, while training in the In-Domain environment and testing in the Adaptation environment is called the w/ OOD experiment. |
| Hardware Specification | No | The paper mentions conducting experiments 'on the same machine' but does not provide any specific hardware details such as CPU/GPU models, memory, or processor types. |
| Software Dependencies | No | The paper mentions various models and operators like 'Fourier Neural Operator (FNO) [45]' and 'Vision Transformer (ViT)-based convolution [10]', but it does not specify software dependencies with version numbers, such as Python versions, deep learning framework versions (e.g., PyTorch, TensorFlow), or CUDA versions. |
| Experiment Setup | No | The paper discusses loss functions and evaluation metrics (MSE) and provides some details on the model architecture in Appendix B. However, it does not explicitly state specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings necessary for reproducing the experimental setup. |
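The w/o OOD vs. w/ OOD protocol quoted in the Dataset Splits row can be sketched in a few lines. This is an illustrative outline only, assuming an abstract `train_fn` and MSE evaluation (the paper's reported metric); the function and variable names are hypothetical and not taken from the released PURE code.

```python
def evaluate(model, data):
    """Mean squared error between model predictions and targets.

    `data` is a list of (input, target) pairs; `model` is any callable.
    """
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def ood_protocol(train_fn, in_domain_train, in_domain_test, adaptation_test):
    """Train once on In-Domain data, then score both test settings.

    Hypothetical sketch of the split described in the Dataset Splits row:
    - 'w/o OOD': train and test in the In-Domain environment
    - 'w/ OOD' : train In-Domain, test in the Adaptation environment
    """
    model = train_fn(in_domain_train)
    return {
        "w/o OOD": evaluate(model, in_domain_test),
        "w/ OOD": evaluate(model, adaptation_test),
    }
```

In this framing, the generalization gap the paper probes is simply the difference between the two reported MSE values for the same trained model.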