Constraint-based graph network simulator
Authors: Yulia Rubanova, Alvaro Sanchez-Gonzalez, Tobias Pfaff, Peter Battaglia
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We tested C-GNS on several physical simulation domains: ropes, bouncing balls and irregular rigids (MuJoCo engine, Todorov et al. (2012)) and splashing fluids (Flex engine, Macklin et al. (2014a)). We found that C-GNS produced more accurate rollouts than the state-of-the-art Graph Net Simulator (Sanchez-Gonzalez et al., 2020) with a comparable number of parameters, and than Neural Projections (Yang et al., 2020). |
| Researcher Affiliation | Industry | DeepMind, London, UK. |
| Pseudocode | Yes | Algorithm 1 Constraint-based graph network simulator |
| Open Source Code | No | No explicit statement or link for open-source code for the methodology is provided. The link provided is for videos/rollouts, not the source code. |
| Open Datasets | Yes | We generated the data for our ROPE, BOUNCING BALLS and BOUNCING RIGIDS datasets using the MuJoCo physics simulator (Todorov et al., 2012). We also tested our model on BOXBATH dataset with 1024 particles from (Li et al., 2019)... |
| Dataset Splits | Yes | Our MuJoCo datasets contain 8000/100/100 train/validation/test trajectories of 160 time points each. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, cloud instances, or detailed specifications) are provided in the paper. |
| Software Dependencies | No | No specific version numbers are provided for the mentioned software components (e.g., 'MuJoCo physics simulator', 'JAX', 'SciPy'). |
| Experiment Setup | Yes | During training we use a fixed number of GD iterations (5). ... We use gradient descent with the fixed step size λ = 0.001. ... We train the models for 1M steps on ROPE, BOUNCING BALLS and BOUNCING RIGIDS. We used the Adam optimizer with an initial learning rate of 0.0001, and a decay factor of 0.7 applied with a schedule at steps (1e5, 2e5, 4e5, 8e5). We use a batch size of 64. ... For all GNNs, we used a residual connection for the nodes and edges on each message-passing layer. |
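The constraint-solving loop described in the Experiment Setup row (a fixed number of gradient-descent iterations with fixed step size λ = 0.001) can be sketched as follows. This is a minimal illustration, not the paper's code: `solve_constraint` is a hypothetical name, and the quadratic constraint in the example stands in for the learned constraint function; only the hyperparameters (5 iterations, λ = 0.001) come from the paper.

```python
import numpy as np

def solve_constraint(grad_c, y_init, num_iters=5, lam=1e-3):
    """Refine a proposed next state y by gradient descent on a
    (learned) scalar constraint c(y), as in the paper's solver:
    a fixed number of iterations with a fixed step size lambda."""
    y = y_init.copy()
    for _ in range(num_iters):
        y = y - lam * grad_c(y)  # fixed step size, no line search
    return y

# Illustrative constraint c(y) = ||y||^2, whose gradient is 2y;
# the real model would use autodiff through a learned GNN constraint.
y_next = solve_constraint(lambda y: 2.0 * y, np.ones(3))
```

With this toy quadratic constraint, each component shrinks by a factor (1 - 2λ) per iteration, so five iterations contract the state only slightly; the paper relies on the learned constraint's gradients being informative rather than on many solver steps.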
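The reported learning-rate schedule (Adam with initial rate 0.0001 and a 0.7 decay factor applied at steps 1e5, 2e5, 4e5, 8e5) can be expressed as a piecewise-constant function. The function name and form are illustrative; only the numbers come from the paper.

```python
def learning_rate(step, base_lr=1e-4, decay=0.7,
                  boundaries=(1e5, 2e5, 4e5, 8e5)):
    """Piecewise-constant schedule: multiply the base rate by the
    decay factor once for each boundary the step has passed."""
    lr = base_lr
    for b in boundaries:
        if step >= b:
            lr *= decay
    return lr
```

For example, the rate stays at 1e-4 until step 1e5 and ends at 1e-4 × 0.7⁴ ≈ 2.4e-5 for the remainder of the 1M-step training run.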