HAMLET: Graph Transformer Neural Operator for Partial Differential Equations

Authors: Andrey Bryutkin, Jiahao Huang, Zhongying Deng, Guang Yang, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero

ICML 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Evidence: "We demonstrate, through extensive experiments, that our framework is capable of outperforming current techniques for PDEs." and "We extensively evaluate HAMLET through experiments on various graphs, which enhances its inference characteristics and reduces overfitting. In addition, we compare our proposed technique with existing approaches and validate it on multiple datasets."
Researcher Affiliation: Academia. Evidence: "1 Department of Mathematics, MIT, USA; 2 Bioengineering Department and Imperial-X, National Heart and Lung Institute & Cardiovascular Research Centre, Imperial College London, UK; 3 Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK. Correspondence to: Andrey Bryutkin <bryutkin@mit.edu>, Jiahao Huang <j.huang21@imperial.ac.uk>."
Pseudocode: No. The paper does not contain explicit pseudocode or algorithm blocks.
Open Source Code: No. The paper provides no explicit statement or link indicating that the source code for HAMLET is open-source or publicly available.
Open Datasets: Yes. Evidence: "We mainly utilised the datasets from PDEBench (Takamoto et al., 2022), a wide-ranging public benchmark for PDE-based simulation tasks, selecting Darcy Flow, Shallow Water, and Diffusion Reaction, showcasing stationary and time-dependent problems on a uniform grid. We also utilised the Airfoil dataset from (Pfaff et al., 2021), which models aerodynamics around an airfoil wing's cross-section, for experiments on irregular grids."
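As an illustration of how such data is typically accessed, below is a minimal sketch of loading the 2D Darcy Flow set with h5py. The filename and the HDF5 keys ("nu", "tensor") follow the public PDEBench release and are assumptions on our part, not details stated in the paper.

```python
# Minimal sketch: loading a PDEBench 2D Darcy Flow file with h5py.
# The filename and the "nu"/"tensor" keys follow the public PDEBench
# release; they are assumptions, not details stated in the paper.
import h5py
import numpy as np

def load_darcy(path="2D_DarcyFlow_beta1.0_Train.hdf5"):
    with h5py.File(path, "r") as f:
        a = np.array(f["nu"])      # diffusion coefficient fields, (N, H, W)
        u = np.array(f["tensor"])  # steady-state solutions, (N, 1, H, W)
    return a, u.squeeze(1)
```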
Dataset Splits: No. The paper specifies training/testing splits (e.g., "9000/1000 samples for training/testing sets" for Darcy Flow and "900/100 for training/testing" for Shallow Water and Diffusion Reaction), but it does not describe a separate validation split.
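For concreteness, a minimal sketch of the reported 9000/1000 Darcy Flow split follows; holding out a validation subset from the training samples is our assumption, since the paper reports no validation split.

```python
# Sketch of the reported 9000/1000 Darcy Flow train/test split.
# Carving a validation subset out of the training samples is an
# assumption on our part; the paper reports no validation split.
import torch
from torch.utils.data import TensorDataset, random_split

def make_splits(a, u, n_train=9000, n_test=1000, val_frac=0.1, seed=0):
    dataset = TensorDataset(torch.as_tensor(a), torch.as_tensor(u))
    assert len(dataset) == n_train + n_test
    gen = torch.Generator().manual_seed(seed)
    train, test = random_split(dataset, [n_train, n_test], generator=gen)
    n_val = int(val_frac * n_train)
    train, val = random_split(train, [n_train - n_val, n_val], generator=gen)
    return train, val, test
```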
Hardware Specification: Yes. Evidence: "All experiments were performed on a single NVIDIA A100 GPU with 80GB of memory, running under the same conditions for a fair comparison." and "Inference time is measured as an average of 50 runs, with a batch size of 1, on an NVIDIA RTX3090."
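The stated timing protocol (average of 50 runs at batch size 1) can be reproduced with a loop such as the sketch below; the warm-up passes and CUDA synchronisation are standard practice we assume here, not steps described in the paper.

```python
# Sketch of the stated timing protocol: mean latency over 50 forward
# passes at batch size 1. The warm-up loop and CUDA synchronisation
# are assumed standard practice; the paper does not describe them.
import time
import torch

@torch.no_grad()
def mean_inference_time(model, sample, n_runs=50, n_warmup=5):
    model.eval()
    for _ in range(n_warmup):
        model(sample)                 # warm up kernels and caches
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        model(sample)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs
```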
Software Dependencies: No. The paper does not state specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions).
Experiment Setup: Yes. Evidence: "More details about the model architecture, training setting, and hyper-parameters can be found in Appendix D." Table 6 lists the implementation details of HAMLET: the learning-rate and optimisation parameters; the encoder (input), encoder (query), and decoder architectures; and specific values for hidden dimensions, number of blocks, number of heads, learning rates, and optimisers (e.g., initial LR 0.0001, Adam optimiser, hidden dim (Enc I) 96, 10 graph-transformer blocks).
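The Table 6 values quoted above translate into a training configuration along the following lines; the model class below is a placeholder standing in for the HAMLET graph-transformer architecture, and only the hyperparameter values are taken from the paper.

```python
# Sketch of a training setup using the Table 6 values quoted above
# (initial LR 1e-4, Adam, hidden dim 96, 10 graph-transformer blocks).
# PlaceholderModel merely stands in for the HAMLET architecture.
import torch
import torch.nn as nn

class PlaceholderModel(nn.Module):
    def __init__(self, hidden_dim=96, num_blocks=10):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_blocks)
        )

    def forward(self, x):
        for block in self.blocks:
            x = torch.relu(block(x))
        return x

model = PlaceholderModel(hidden_dim=96, num_blocks=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```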