Equivariant Graph Mechanics Networks with Constraints
Authors: Wenbing Huang, Jiaqi Han, Yu Rong, Tingyang Xu, Fuchun Sun, Junzhou Huang
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments support the advantages of GMN compared to the state-of-the-art GNNs in terms of prediction accuracy, constraint satisfaction and data efficiency on the simulated systems consisting of particles, sticks and hinges, as well as two real-world datasets for molecular dynamics prediction and human motion capture. |
| Researcher Affiliation | Collaboration | Wenbing Huang¹, Jiaqi Han², Yu Rong³, Tingyang Xu³, Fuchun Sun², Junzhou Huang⁴. ¹Institute for AI Industry Research (AIR), Tsinghua University; ²Beijing National Research Center for Information Science and Technology (BNRist), Department of Computer Science and Technology, Tsinghua University; ³Tencent AI Lab; ⁴Department of Computer Science and Engineering, University of Texas at Arlington |
| Pseudocode | Yes | Alg. 1 summarizes the updates for all objects. |
| Open Source Code | Yes | Our code is available at: https://github.com/hanjq17/GMN. |
| Open Datasets | Yes | We inherit the 3D extension of Fuchs et al. (2020) based on the N-body simulation introduced in Kipf et al. (2018). ... We adopt MD17 (Chmiela et al., 2017), which involves the trajectories of eight molecules generated via molecular dynamics simulation. ... We use the motion data from the CMU Motion Capture Database (CMU, 2003). |
| Dataset Splits | Yes | The validation and testing sets contain 2000 trajectories. ... We randomly split the dataset into train/validation/test sets containing 500/2000/2000 frame pairs respectively. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'Multi-Layer Perceptron (MLP)' but does not specify any software versions for libraries (e.g., PyTorch version) or programming languages. |
| Experiment Setup | Yes | For GNN, RF, EGNN, EGNNReg, and GMN, we empirically find that the following hyper-parameters generally work well, and use them across all experimental evaluations: batch size 200, Adam optimizer with learning rate 0.0005, hidden dim 64, and weight decay 1e-10. All models are evaluated with four layers. ... batch size 100, learning rate 0.001, hidden dim 64, representation degrees 3, and weight decay 1e-8. Models are trained for 600 epochs on the simulation dataset and for 500 epochs on the real-world datasets. (A training-configuration sketch follows the table.) |
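
A minimal sketch of the random 500/2000/2000 split quoted in the Dataset Splits row, assuming the frame pairs are addressable by integer index; the function name and the fixed seed are illustrative, not taken from the authors' code.

```python
import numpy as np

def random_split(num_samples: int, n_train: int = 500, n_valid: int = 2000,
                 n_test: int = 2000, seed: int = 0):
    """Randomly partition sample indices into train/validation/test sets,
    mirroring the 500/2000/2000 frame-pair split quoted above."""
    assert n_train + n_valid + n_test <= num_samples
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_samples)
    train = perm[:n_train]
    valid = perm[n_train:n_train + n_valid]
    test = perm[n_train + n_valid:n_train + n_valid + n_test]
    return train, valid, test

# Example: split a pool of 4500 frame pairs (500 + 2000 + 2000).
train_idx, valid_idx, test_idx = random_split(4500)
```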
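
And a runnable sketch of the quoted training configuration for GMN (batch size 200, Adam with learning rate 0.0005, hidden dim 64, weight decay 1e-10, four layers, 600 epochs on the simulation dataset). The model here is a plain MLP standing in for GMN, since the real architecture lives in the authors' repository; the toy tensors exist only so the loop executes.

```python
import torch
import torch.nn as nn

# Placeholder model: a 4-layer MLP with hidden dim 64. This is NOT the GMN
# architecture (see https://github.com/hanjq17/GMN); it only stands in so the
# quoted optimizer/training settings below can be exercised end to end.
model = nn.Sequential(
    nn.Linear(3, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 3),
)

# Settings quoted from the paper: Adam, lr 0.0005, weight decay 1e-10.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-10)

# Toy batch of size 200 (the quoted batch size); real inputs would be
# particle positions/velocities from the simulated or real-world datasets.
x = torch.randn(200, 3)
y = torch.randn(200, 3)

for epoch in range(600):  # 600 epochs on simulation data (500 on real data)
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```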