Scalable Transformer for PDE Surrogate Modeling

Authors: Zijie Li, Dule Shu, Amir Barati Farimani

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We showcase that the proposed model is able to simulate 2D Kolmogorov flow on a 256×256 grid and 3D smoke buoyancy on a 64×64×64 grid with good accuracy and efficiency. The proposed factorized scheme can serve as a computationally efficient low-rank surrogate for the full attention scheme when dealing with multi-dimensional problems.
Researcher Affiliation | Academia | Zijie Li, Dule Shu, Amir Barati Farimani; Mechanical Engineering Department, Carnegie Mellon University; {zijieli, dules}@andrew.cmu.edu and barati@cmu.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code for this project is available at: https://github.com/BaratiLab/FactFormer.
Open Datasets | Yes | The dataset consists of 100 trajectories for training and 20 trajectories for testing, with each trajectory spanning 10 seconds (160 frames).
Dataset Splits | No | The paper specifies train/test splits (e.g., "100 trajectories for training and 20 trajectories for testing" for 2D Kolmogorov flow) but does not report a separate validation split or describe how one was derived.
Hardware Specification | Yes | The benchmark is carried out using PyTorch 1.8.2 on an RTX 3090, with a batch size of 4.
Software Dependencies | Yes | All the experiments are carried out using PyTorch 1.8 except for the FNO/F-FNO experiments, which use PyTorch 1.13 for optimizing complex-valued parameters.
Experiment Setup | Yes | The major hyperparameters are listed in Table 5:
  Hyperparameter   | 2D Kolmogorov | 3D turbulence | 3D smoke | 2D Darcy flow
  Hidden dimension | 128           | 128           | 128      | 128
  Depth            | 4             | 4             | 3        | 3
  Heads            | 8             | 6             | 6        | 12
  Kernel dimension | 128           | 192           | 192      | 128
  Input encoder    | 2D Conv       | 2D Conv       | 2D Conv  | MLP
  Output decoder   | MLP           | MLP           | MLP      | MLP
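
For reference, the Table 5 values above can be transcribed into a plain configuration mapping. This is a hypothetical layout: the key names are my own and the released FactFormer code may organize its configs differently.

```python
# Hypothetical transcription of the quoted Table 5 hyperparameters.
# Values are copied verbatim from the table above; key names are assumptions.
factformer_configs = {
    "2d_kolmogorov": dict(hidden_dim=128, depth=4, heads=8,  kernel_dim=128,
                          input_encoder="2D Conv", output_decoder="MLP"),
    "3d_turbulence": dict(hidden_dim=128, depth=4, heads=6,  kernel_dim=192,
                          input_encoder="2D Conv", output_decoder="MLP"),
    "3d_smoke":      dict(hidden_dim=128, depth=3, heads=6,  kernel_dim=192,
                          input_encoder="2D Conv", output_decoder="MLP"),
    "2d_darcy":      dict(hidden_dim=128, depth=3, heads=12, kernel_dim=128,
                          input_encoder="MLP",     output_decoder="MLP"),
}
```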
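
The Research Type row quotes the paper's claim that the factorized scheme acts as a computationally efficient low-rank surrogate for full attention on multi-dimensional grids. Below is a minimal sketch of axis-wise factorized attention in PyTorch, illustrating the general idea of attending along one spatial axis at a time so the cost scales with the axis lengths rather than the full grid size. It is not the paper's exact factorized kernel-integral operator, and the class and parameter names are my own.

```python
# Minimal sketch of axis-factorized attention on a 2D grid (assumption:
# this only illustrates per-axis factorization, not FactFormer's operator).
import torch
import torch.nn as nn


class FactorizedAxialAttention2D(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        # One attention module per spatial axis; weights are not shared.
        self.attn_x = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_y = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, height, width, dim) field sampled on a regular grid.
        b, h, w, d = u.shape

        # Attend along the x-axis: fold the y-axis into the batch, so each
        # grid row becomes an independent sequence of length w.
        rows = u.reshape(b * h, w, d)
        rows, _ = self.attn_x(rows, rows, rows)
        u = rows.reshape(b, h, w, d)

        # Attend along the y-axis: fold the x-axis into the batch.
        cols = u.permute(0, 2, 1, 3).reshape(b * w, h, d)
        cols, _ = self.attn_y(cols, cols, cols)
        return cols.reshape(b, w, h, d).permute(0, 2, 1, 3)


if __name__ == "__main__":
    layer = FactorizedAxialAttention2D(dim=128, heads=8)
    field = torch.randn(2, 64, 64, 128)   # small grid for a quick check
    print(layer(field).shape)             # torch.Size([2, 64, 64, 128])
```

Each sequence passed to attention has length w or h instead of h*w, which is what makes the factorized scheme tractable on the 256×256 and 64×64×64 grids mentioned above.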