Predicting Physics in Mesh-reduced Space with Temporal Attention
Authors: Xu Han, Han Gao, Tobias Pfaff, Jian-Xun Wang, Liping Liu
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method outperforms a competitive GNN baseline on several complex fluid dynamics prediction tasks, from sonic shocks to vascular flow. We demonstrate stable rollouts without the need of training noise and show perfectly phase-stable predictions even for very long sequences. |
| Researcher Affiliation | Collaboration | Xu Han (Tufts University, Xu.Han@tufts.edu); Han Gao (University of Notre Dame, hgao1@nd.edu); Tobias Pfaff (DeepMind, tob.pfaff@gmail.com); Jian-Xun Wang (University of Notre Dame, jwang33@nd.edu); Liping Liu (Tufts University, Liping.Liu@tufts.edu) |
| Pseudocode | Yes | Algorithm 1 Training Process |
| Open Source Code | No | The paper states 'We use the open-source FVM library Open FOAM [20] to conduct all CFD simulations.', but it does not provide any information about the availability of the authors' own code. |
| Open Datasets | No | Three flow datasets, cylinder flow, sonic flow and vascular flow, are used in our numerical experiments... The ground truth datasets are generated by solving the incompressible/compressible NS equations based on the finite volume method (FVM). We use the open-source FVM library Open FOAM [20] to conduct all CFD simulations. |
| Dataset Splits | No | The paper mentions '51 training and 50 test trajectories' and '11 training and 10 test trajectories' but does not explicitly state a validation split. |
| Hardware Specification | No | The paper mentions 'typical computation servers' in A.2 but provides no specific hardware details such as GPU/CPU models, memory, or cloud instance types used for experiments. |
| Software Dependencies | No | The paper mentions 'Open FOAM [20]' but does not provide a version number. Other software components like MLPs, Transformer, GNNs are mentioned generally without specific versions. |
| Experiment Setup | Yes | Node and edge representations in our Graph Nets are vectors of width 128. The node and edge functions (mlpv, mlpe, mlpr) are MLPs with two hidden layers of size 128, and ReLU activation... We use a single layer and four attention heads in our transformer model. The embedding sizes of z for each dataset (cylinder flow, sonic flow, and vascular flow) are 1024, 1024, and 800, respectively. |
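The experiment-setup row above fully pins down the layer widths, so the reported architecture can be sketched directly. Below is a minimal, hedged reconstruction in PyTorch of those dimensions only — the two-hidden-layer, width-128 ReLU MLPs used as node/edge functions, and the single-layer, four-head temporal transformer over latents of size 1024 (cylinder and sonic flow). The input dimension of the node MLP and the rollout length are illustrative assumptions, not values from the paper, and this is not the authors' code.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 128) -> nn.Sequential:
    """Node/edge function as described: two hidden layers of size 128, ReLU."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# Temporal attention over the sequence of mesh-reduced latents z_t:
# one transformer encoder layer with four attention heads, d_model = 1024.
temporal_attention = nn.TransformerEncoderLayer(
    d_model=1024, nhead=4, batch_first=True)

# mlp_v: node function mapping raw node features to 128-wide representations
# (the input dimension 3 is an assumption for illustration).
mlp_v = make_mlp(in_dim=3, out_dim=128)

z = torch.randn(1, 50, 1024)   # (batch, time steps, latent dim) — 50 steps assumed
out = temporal_attention(z)    # shape is preserved: (1, 50, 1024)
```

The transformer layer keeps the latent shape, so stacking predictions for long rollouts only requires feeding the attended sequence back through the decoder Graph Net described in the paper.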