SEGNO: Generalizing Equivariant Graph Neural Networks with Physical Inductive Biases
Authors: Yang Liu, Jiashun Cheng, Haihong Zhao, Tingyang Xu, Peilin Zhao, Fugee Tsung, Jia Li, Yu Rong
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on complex dynamical systems including molecular dynamics and motion capture demonstrate that our model yields a significant improvement over the state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | (1) The Hong Kong University of Science and Technology (Guangzhou); (2) The Hong Kong University of Science and Technology; (3) Tencent AI Lab |
| Pseudocode | No | The paper describes the SEGNO framework using mathematical equations and prose, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using and building upon code from previous works (e.g., 'We use the same N-body charged system code4 with previous work (Satorras et al., 2021; Brandstetter et al., 2021).'), but does not provide a statement or link for the open-sourcing of their own SEGNO implementation. |
| Open Datasets | Yes | We first evaluate it on two simulated N-body systems, namely Charged particles and Gravity particles, which are driven by electromagnetic (Kipf et al., 2018) and gravitational forces (Brandstetter et al., 2021) between each pair of particles, respectively. Subsequently, we compare our model with state-of-the-art models in two challenging datasets: (1) MD22 (Chmiela et al., 2023)... (2) CMU motion capture (CMU, 2003)... |
| Dataset Splits | Yes | The number of training, validation, and testing sets are 3000, 2000, and 2000, respectively. (B.1) / The number of training, validation, and testing sets are 500, 2000, and 2000, respectively. (B.2) / We adopt a random split strategy introduced by Huang et al. (2022) where train/validation/test data contains 200/600/600 frame pairs. (B.3) |
| Hardware Specification | Yes | We run experiments on NVIDIA RTX A6000 GPU. (5.2) / We evaluate the running time of each model on N-body systems with Tesla T4 GPU... (C.6) |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'EGNN as the GNN backbone', but does not specify version numbers for any software libraries, frameworks, or programming languages. |
| Experiment Setup | Yes | We empirically find that the following hyperparameters generally work well, and use them across most experimental evaluations: Adam optimizer with learning rate 0.001, the number of epochs 500, hidden dim 64, weight decay 1×10⁻¹², and layer number 4. We set the iteration time of SEGNO to 8. |
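The setup row lists the reported hyperparameters. Since the authors' SEGNO implementation is not released, the sketch below is purely illustrative: it collects the reported values in a config dict and shows a toy iterated second-order update loop of the kind the "iteration time of SEGNO to 8" setting suggests (all names and the update rule are our assumptions, not the paper's code).

```python
# Hedged sketch: reported hyperparameters as a config dict, plus a toy
# second-order rollout. Names are hypothetical; the authors' code is unreleased.
CONFIG = {
    "optimizer": "Adam",        # reported optimizer
    "learning_rate": 1e-3,      # reported learning rate
    "epochs": 500,              # reported number of epochs
    "hidden_dim": 64,           # reported hidden dimension
    "weight_decay": 1e-12,      # reported weight decay (1×10⁻¹²)
    "num_layers": 4,            # reported layer number
    "segno_iterations": 8,      # reported SEGNO iteration time
}

def second_order_rollout(x, v, accel_fn, steps=CONFIG["segno_iterations"], horizon=1.0):
    """Toy semi-implicit Euler rollout over `steps` equal sub-steps.

    Illustrates an iterated position/velocity update with a shared
    acceleration module, a plausible reading of the second-order physical
    inductive bias the paper's title refers to (our interpretation).
    """
    dt = horizon / steps
    for _ in range(steps):
        a = accel_fn(x, v)   # learned acceleration would go here
        v = v + dt * a       # velocity update
        x = x + dt * v       # position update
    return x, v

# Toy usage: constant unit acceleration from rest over a unit horizon.
xf, vf = second_order_rollout(0.0, 0.0, lambda x, v: 1.0)
```

With 8 sub-steps and constant acceleration, the final velocity is exactly 1.0 and the final position approximates the analytic ½at² = 0.5 from above (0.5625), converging to it as `steps` grows.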