Learning rigid dynamics with face interaction graph networks
Authors: Kelsey R Allen, Yulia Rubanova, Tatiana Lopez-Guevara, William F Whitney, Alvaro Sanchez-Gonzalez, Peter Battaglia, Tobias Pfaff
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Compared to learned node- and particle-based methods, FIGNet is around 4x more accurate in simulating complex shape interactions, while also 8x more computationally efficient on sparse, rigid meshes. We conduct a series of ablations and experiments which showcase how face-to-face collision representations dramatically improve rigid body dynamics prediction. |
| Researcher Affiliation | Industry | Kelsey R. Allen, Yulia Rubanova, Tatiana Lopez-Guevara, William Whitney, Alvaro Sanchez-Gonzalez, Peter Battaglia, Tobias Pfaff. DeepMind, London, UK |
| Pseudocode | No | The paper describes the model architecture and algorithms using prose and mathematical equations, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain an explicit statement that the source code for FIGNet is being released or a direct link to its repository. |
| Open Datasets | Yes | The Kubric dataset (Greff et al., 2022) consists of the rigid-body simulations of diverse 3D objects tossed simultaneously onto a large plane. For these experiments, we use the MIT Pushing Dataset ((Yu et al., 2016); Figure 7a)... |
| Dataset Splits | Yes | For each of the MOVi-A, MOVi-B and MOVi-C datasets, we use 1500 trajectories for training, 100 for validation and 100 for testing. |
| Hardware Specification | Yes | All models are trained to 1M steps with a batch size of 128 across 8 TPU devices. |
| Software Dependencies | No | The paper mentions using components like Adam optimizer and Layer Norm, but does not provide specific version numbers for software libraries or frameworks like PyTorch or TensorFlow. |
| Experiment Setup | Yes | All models are trained to 1M steps with a batch size of 128 across 8 TPU devices. We use Adam optimizer, and an exponential learning rate decay from 1e-3 to 1e-4. To stabilize long rollouts, we add random-walk noise to the positions during training (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2021; Allen et al., 2022). (A hedged code sketch of this setup appears below the table.) |
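
The following is a minimal sketch of the quoted training configuration, not the authors' code. The paper does not state its software stack (see the Software Dependencies row), so the use of JAX and optax, the function names, and the noise magnitude are assumptions made only to illustrate the quoted hyperparameters (1M steps, batch size 128, Adam, exponential learning-rate decay from 1e-3 to 1e-4, random-walk noise on positions).

```python
# Sketch only: library choices (JAX + optax) are assumptions; the paper does
# not specify its frameworks or versions.
import jax
import jax.numpy as jnp
import optax

# Quoted hyperparameters: 1M training steps, batch size 128.
NUM_STEPS = 1_000_000
BATCH_SIZE = 128

# Exponential decay from 1e-3 to 1e-4 over training: the schedule multiplies
# the initial value by decay_rate^(step / transition_steps), reaching 1e-4 at
# the final step.
lr_schedule = optax.exponential_decay(
    init_value=1e-3,
    transition_steps=NUM_STEPS,
    decay_rate=1e-4 / 1e-3,  # overall factor of 0.1
)
optimizer = optax.adam(learning_rate=lr_schedule)


def add_random_walk_noise(positions, key, noise_std=3e-4):
    """Adds random-walk noise to a position sequence of shape [T, N, 3].

    The per-step scale `noise_std` is hypothetical: the paper cites
    Sanchez-Gonzalez et al. (2020) for the technique, but the exact magnitude
    used for FIGNet is not quoted in this table.
    """
    steps = jax.random.normal(key, positions.shape) * noise_std
    # Accumulate per-step perturbations along the time axis so the noise
    # behaves like a random walk rather than i.i.d. jitter.
    walk = jnp.cumsum(steps, axis=0)
    return positions + walk
```
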