Transolver: A Fast Transformer Solver for PDEs on General Geometries

Authors: Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, Mingsheng Long

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on six well-established benchmarks with various geometries and large-scale industrial simulations, where Transolver achieves consistent state-of-the-art with impressive relative gain.
Researcher Affiliation | Academia | School of Software, BNRist, Tsinghua University. Haixu Wu <wuhx23@mails.tsinghua.edu.cn>. Correspondence to: Mingsheng Long <mingsheng@tsinghua.edu.cn>.
Pseudocode | No | The paper describes the model architecture and processes using mathematical equations and block diagrams (Figure 3), but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code is available at https://github.com/thuml/Transolver.
Open Datasets | Yes | As presented in Table 1, our experiments span point cloud, structured mesh, regular grid and unstructured mesh in both 2D and 3D space. Elasticity, Plasticity, Airfoil, Pipe, Navier-Stokes and Darcy were proposed by FNO (Li et al., 2021) and geo-FNO (Li et al., 2022), which have been widely followed. Besides, we also experiment with car and airfoil design tasks. Shape-Net Car (Umetani & Bickel, 2018) requires estimating the surface pressure and surrounding air velocity given vehicle shapes. AirfRANS (Bonnet et al., 2022) contains high-fidelity simulation data for Reynolds-Averaged Navier-Stokes equations on airfoils from the National Advisory Committee for Aeronautics.
Dataset Splits | No | Table 7 and Section B.1 describe training and test sets (e.g., '1000 samples... for training and another 200 samples are used for test'), but there is no explicit mention of a separate validation split.
Hardware Specification | Yes | All the experiments are conducted on one NVIDIA A100 GPU and repeated three times.
Software Dependencies | No | The paper does not explicitly provide specific version numbers for software dependencies such as programming languages, libraries (e.g., PyTorch, TensorFlow), or CUDA.
Experiment Setup | Yes | Table 8 provides a detailed 'TRAINING CONFIGURATION' including 'LOSS', 'EPOCHS', 'INITIAL LR', 'OPTIMIZER', and 'BATCH SIZE'. It also specifies 'LAYERS L' as 8, 'CHANNELS C' as 128 or 256, and 'SLICES M' as 32 or 64. For example, 'INITIAL LR 10e-3' and 'BATCH SIZE 4' for Elasticity.
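The quoted Table 8 fields can be collected into a small settings object for a reproduction attempt. The sketch below is hypothetical: field names mirror the quoted column headers, the `initial_lr`, `batch_size`, `layers`, `channels`, and `slices` values for Elasticity come from the excerpt above, and everything else (the loss name, epoch count, optimizer choice) is a placeholder, since the excerpt does not state those values.

```python
# Minimal sketch of a per-benchmark training configuration, assuming the
# column layout quoted from Table 8. Values not quoted in the excerpt
# (loss, epochs, optimizer) are placeholders, not the authors' settings.
from dataclasses import dataclass


@dataclass
class TrainConfig:
    loss: str          # 'LOSS' column (placeholder name below)
    epochs: int        # 'EPOCHS' column (placeholder value below)
    initial_lr: float  # 'INITIAL LR' column; 10e-3 quoted for Elasticity
    optimizer: str     # 'OPTIMIZER' column (placeholder name below)
    batch_size: int    # 'BATCH SIZE' column; 4 quoted for Elasticity
    layers: int        # 'LAYERS L' = 8
    channels: int      # 'CHANNELS C' = 128 or 256
    slices: int        # 'SLICES M' = 32 or 64


# Example instance for the Elasticity benchmark.
elasticity = TrainConfig(
    loss="relative L2",  # placeholder; not stated in the excerpt
    epochs=500,          # placeholder; not stated in the excerpt
    initial_lr=10e-3,    # quoted: 'INITIAL LR 10e-3'
    optimizer="AdamW",   # placeholder; not stated in the excerpt
    batch_size=4,        # quoted: 'BATCH SIZE 4'
    layers=8,
    channels=128,
    slices=32,
)
```

Keeping the configuration in one typed object makes it easy to compare a reproduction against the paper's Table 8 field by field, and to swap in the 256-channel / 64-slice variants the excerpt mentions for other benchmarks.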