Grounding Graph Network Simulators using Physical Sensor Observations

Authors: Jonas Linkerhägner, Niklas Freymuth, Paul Maria Scheikl, Franziska Mathis-Ullrich, Gerhard Neumann

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally validate our approach on a suite of prediction tasks for mesh-based interactions between soft and rigid bodies. Our method utilizes additional point cloud information to accurately predict stable simulations where existing Graph Network Simulators fail.
Researcher Affiliation | Academia | (1) Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany; (2) Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
Pseudocode | No | The paper includes a detailed description of the Message Passing Network and Graph Network Simulator, along with a schematic diagram (Figure 7), but it does not present formal pseudocode or algorithm blocks.
Open Source Code | Yes | Code and data can be found under https://github.com/jlinki/GGNS.
Open Datasets | Yes | Code and data can be found under https://github.com/jlinki/GGNS.
Dataset Splits | Yes | We use a total of 675/135/135 trajectories for our training, validation and test sets. Each trajectory consists of T = 50 timesteps.
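The quoted 675/135/135 split can be sketched as a simple partition of trajectory indices. This is an illustrative reconstruction, not the paper's actual data-loading code; only the counts (675/135/135, T = 50) come from the paper.

```python
# Illustrative sketch of the 675/135/135 train/validation/test split
# over trajectory indices; the index ordering here is hypothetical.
TRAIN, VAL, TEST = 675, 135, 135
TIMESTEPS_PER_TRAJECTORY = 50  # T = 50

indices = list(range(TRAIN + VAL + TEST))  # 945 trajectories in total
train_ids = indices[:TRAIN]
val_ids = indices[TRAIN:TRAIN + VAL]
test_ids = indices[TRAIN + VAL:]
```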
Hardware Specification | No | The paper mentions running experiments 'on a single GPU' (Table 4) but does not provide specific details such as the GPU model (e.g., NVIDIA A100, RTX 3090) or other hardware specifications like CPU or memory.
Software Dependencies | No | The paper mentions software like SOFA and Open3D but does not specify their version numbers, which is necessary for reproducible software dependencies.
Experiment Setup | Yes | We train all models on all tasks using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5 × 10⁻⁴ and a batch size of 32, using early stopping on a held-out validation set to save the best model iteration for each setting. The models use a LeakyReLU activation function, five message passing blocks with 1-layer MLPs and a latent dimension of 128 for node and edge updates. We use a mean aggregation for the edge features and a training noise of 0.01.
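The hyperparameters in this row can be collected into a minimal PyTorch sketch. Only the values themselves (Adam with learning rate 5 × 10⁻⁴, batch size 32, LeakyReLU, latent dimension 128, five message passing blocks, training noise 0.01) come from the paper; the toy placeholder network and input/target shapes below are hypothetical stand-ins for the full Graph Network Simulator.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper's experiment setup.
LATENT_DIM = 128
LEARNING_RATE = 5e-4
BATCH_SIZE = 32
TRAINING_NOISE_STD = 0.01
NUM_MESSAGE_PASSING_BLOCKS = 5  # not used by the toy model below


def make_update_mlp(in_dim: int) -> nn.Sequential:
    # 1-layer MLP with LeakyReLU, as described for node/edge updates.
    return nn.Sequential(nn.Linear(in_dim, LATENT_DIM), nn.LeakyReLU())


# Toy placeholder standing in for the full Graph Network Simulator.
model = make_update_mlp(in_dim=3)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# One illustrative training step on random data, with Gaussian input
# noise of std 0.01 as described in the paper.
positions = torch.randn(BATCH_SIZE, 3)
noisy_inputs = positions + TRAINING_NOISE_STD * torch.randn_like(positions)
targets = torch.randn(BATCH_SIZE, LATENT_DIM)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(noisy_inputs), targets)
loss.backward()
optimizer.step()
```

The early-stopping loop over the held-out validation set is omitted here; it would wrap this step and keep the model checkpoint with the lowest validation loss.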