Equivariant Spatio-Temporal Attentive Graph Networks to Simulate Physical Dynamics
Authors: Liming Wu, Zhichao Hou, Jirui Yuan, Yu Rong, Wenbing Huang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our model on three real-world datasets: 1) molecular-level: MD17 [5], 2) protein-level: AdK equilibrium trajectory dataset [34] and 3) macro-level: CMU Motion Capture Database [7]. Table 1 shows the average MSE of all models on 8 molecules. |
| Researcher Affiliation | Collaboration | Liming Wu1,2, Zhichao Hou3, Jirui Yuan4, Yu Rong5, Wenbing Huang1,2; 1Gaoling School of Artificial Intelligence, Renmin University of China; 2Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China; 3Department of Computer Science, North Carolina State University; 4Institute for AI Industry Research (AIR), Tsinghua University; 5Tencent AI Lab |
| Pseudocode | Yes | Algorithm 1 Equivariant Spatio-Temporal Attentive Graph Networks (ESTAG) |
| Open Source Code | Yes | The codes of ESTAG are available at: https://github.com/ManlioWu/ESTAG. |
| Open Datasets | Yes | We evaluate our model on three real-world datasets: 1) molecular-level: MD17 [5], 2) protein-level: AdK equilibrium trajectory dataset [34] and 3) macro-level: CMU Motion Capture Database [7]. |
| Dataset Splits | Yes | The number of training, validation and testing sets are 500, 2000 and 2000, respectively. |
| Hardware Specification | No | No specific hardware details (like GPU models, CPU types, or memory) used for running the experiments are mentioned. |
| Software Dependencies | No | The paper mentions software like "PyMol toolkit", "MDAnalysis toolkit", and "ChimeraX software" but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We use the following hyper-parameters across all experimental evaluations: batch size 100, the number of epochs 500, weight decay 1×10⁻¹², the number of layers 4 (we consider one ESTAG includes two layers, i.e. ESM and ETM), hidden dim 16, Adam optimizer with learning rate 5×10⁻³. |
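The reported hyper-parameters can be collected into a single configuration sketch. This is only an illustrative reconstruction of the settings quoted above; the names `HYPERPARAMS` and `make_optimizer_config` are hypothetical and do not come from the ESTAG codebase.

```python
# Illustrative configuration mirroring the hyper-parameters reported for ESTAG.
# All keys/values are taken from the paper's stated setup; the structure itself
# is an assumption, not the authors' actual code.
HYPERPARAMS = {
    "batch_size": 100,
    "epochs": 500,
    "weight_decay": 1e-12,
    "num_layers": 4,        # one ESTAG block = one ESM layer + one ETM layer
    "hidden_dim": 16,
    "optimizer": "Adam",
    "learning_rate": 5e-3,
}

def make_optimizer_config(params: dict) -> dict:
    """Group the optimizer-related settings (hypothetical helper)."""
    return {
        "name": params["optimizer"],
        "lr": params["learning_rate"],
        "weight_decay": params["weight_decay"],
    }

print(make_optimizer_config(HYPERPARAMS))
```

Keeping the optimizer settings in one place like this makes it easy to verify a reproduction run against the paper's stated values.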