Graph Neural PDE Solvers with Conservation and Similarity-Equivariance
Authors: Masanobu Horie, Naoto Mitsume
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our findings from experiments demonstrate that the model's inclusion of physical laws significantly enhances its generalizability, i.e., no significant accuracy degradation for unseen spatial domains while other models degrade. |
| Researcher Affiliation | Collaboration | RICOS Co. Ltd., Tokyo, Japan; Graduate School of Science and Technology, University of Tsukuba, Ibaraki, Japan. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/yellowshippo/fluxgnn-icml2024. |
| Open Datasets | No | For our datasets, we generated 100 trajectories for training, 10 for validation, and 10 for testing. These were derived using the exact solution of the equation, with random variations in uniform velocity u from 0.0 to 0.2, and in the amplitude and phase of the sinusoidal initial condition. |
| Dataset Splits | Yes | For our datasets, we generated 100 trajectories for training, 10 for validation, and 10 for testing. |
| Hardware Specification | Yes | It was trained on a CPU (Intel Xeon CPU E5-2695 v2 @ 2.40 GHz) for 3 hours... All machine learning models were trained on GPUs (NVIDIA A100 80GB PCIe) over a period of three days. |
| Software Dependencies | Yes | We have implemented all our models using PyTorch 1.9.1 (Paszke et al., 2019). |
| Experiment Setup | Yes | The model was trained on a CPU (Intel Xeon CPU E5-2695 v2 @ 2.40 GHz) for 3 hours, using MSE loss and an Adam optimizer (Kingma & Ba, 2014)... The hyperparameter employed for the study is detailed in Table 5. |
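
To make the dataset description above concrete, here is a minimal sketch of how such trajectories could be generated. Only the split sizes (100/10/10) and the velocity range u in [0.0, 0.2] come from the paper; the 1D linear-advection exact solution, grid size, time step, and amplitude range are illustrative assumptions, not the authors' generation script.

```python
import numpy as np

def generate_trajectory(rng, n_x=64, n_t=50, dt=0.1, length=1.0):
    """One trajectory from the exact solution of 1D linear advection (assumed equation).

    Only the sampling ranges (u in [0.0, 0.2], random amplitude and phase of the
    sinusoidal initial condition) follow the paper; everything else is illustrative.
    """
    u = rng.uniform(0.0, 0.2)            # uniform advection velocity, as in the paper
    amplitude = rng.uniform(0.5, 1.0)    # assumed amplitude range (not stated in the paper)
    phase = rng.uniform(0.0, 2 * np.pi)  # random phase of the sinusoidal initial condition
    x = np.linspace(0.0, length, n_x, endpoint=False)
    t = np.arange(n_t) * dt
    k = 2 * np.pi / length
    # Exact solution: the initial sinusoid translated by u * t on a periodic domain.
    return amplitude * np.sin(k * (x[None, :] - u * t[:, None]) + phase)

def generate_split(seed, n_trajectories):
    rng = np.random.default_rng(seed)
    return np.stack([generate_trajectory(rng) for _ in range(n_trajectories)])

# 100 trajectories for training, 10 for validation, 10 for testing, as reported.
train = generate_split(0, 100)
validation = generate_split(1, 10)
test = generate_split(2, 10)
```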
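
The experiment-setup row reports training with an MSE loss and an Adam optimizer in PyTorch. The sketch below shows only that combination; the model is a stand-in (the actual FluxGNN architecture lives in the authors' repository at https://github.com/yellowshippo/fluxgnn-icml2024), and the learning rate is an assumption since the paper's hyperparameters are listed in its Table 5.

```python
import torch

# Placeholder model standing in for the FluxGNN architecture; used only to
# illustrate the reported loss/optimizer choice, not the authors' model.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate assumed; see Table 5 of the paper
loss_fn = torch.nn.MSELoss()

def train_step(batch_input, batch_target):
    """One optimization step with the MSE loss and Adam optimizer reported in the paper."""
    optimizer.zero_grad()
    prediction = model(batch_input)
    loss = loss_fn(prediction, batch_target)
    loss.backward()
    optimizer.step()
    return loss.item()
```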