Learning Physical Dynamics with Subequivariant Graph Neural Networks

Authors: Jiaqi Han, Wenbing Huang, Hengbo Ma, Jiachen Li, Josh Tenenbaum, Chuang Gan

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We conduct evaluations on Physion [4] and RigidFall [21]. Experimental results show that our model yields more accurate dynamics predictions, is highly data-efficient, and generalizes more strongly than state-of-the-art learning-based differentiable physical simulators. |
| Researcher Affiliation | Collaboration | Jiaqi Han (Tsinghua University); Wenbing Huang (Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Key Laboratory of Big Data Management and Analysis Methods); Hengbo Ma (University of California, Berkeley); Jiachen Li (Stanford University); Joshua B. Tenenbaum (MIT BCS, CBMM, CSAIL); Chuang Gan (UMass Amherst; MIT-IBM Watson AI Lab) |
| Pseudocode | No | The paper describes the methodology with equations and text, but it contains no clearly labeled "Pseudocode" or "Algorithm" block. |
| Open Source Code | Yes | Code and videos are available at our project page: https://hanjq17.github.io/SGNN/. |
| Open Datasets | Yes | We conduct evaluations on Physion [4] and RigidFall [21]. |
| Dataset Splits | Yes | The networks are trained with an Adam optimizer, using an initial learning rate of 0.0001 and early stopping with a patience of 10 epochs on the validation loss, which implies a held-out validation split. |
| Hardware Specification | No | The paper does not state the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions an Adam optimizer and MLPs but gives no version numbers for software dependencies such as deep learning frameworks or libraries. |
| Experiment Setup | Yes | All MLPs are initialized with 3 projection layers and a hidden dimension of 200. The networks are trained with an Adam optimizer, an initial learning rate of 0.0001, and early stopping with a patience of 10 epochs on the validation loss. We use 4 iterations in each message passing of the model (see the sketch below the table). |
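The Experiment Setup row fixes all of the reported hyperparameters. The following is a minimal PyTorch sketch of that training configuration, not the authors' implementation: the MLP depth (3 projection layers), hidden dimension (200), Adam optimizer, initial learning rate (0.0001), and 10-epoch early stopping on the validation loss come from the paper, while the input/output dimensions, SiLU activation, synthetic data, and epoch budget are placeholder assumptions. The subequivariant message-passing module itself (4 iterations per pass) is not reproduced here.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 200, n_layers: int = 3) -> nn.Sequential:
    # 3 projection layers with a hidden dimension of 200, as reported.
    layers, dim = [], in_dim
    for _ in range(n_layers - 1):
        layers += [nn.Linear(dim, hidden), nn.SiLU()]  # activation choice is an assumption
        dim = hidden
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)

# Hypothetical shapes: 64-d node features in, 3-d displacement out.
model = make_mlp(64, 3)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported initial learning rate
loss_fn = nn.MSELoss()

# Synthetic stand-in data; the paper trains on Physion / RigidFall trajectories.
x_train, y_train = torch.randn(512, 64), torch.randn(512, 3)
x_val, y_val = torch.randn(128, 64), torch.randn(128, 3)

best_val, patience, stall = float("inf"), 10, 0
for epoch in range(1000):  # epoch budget is a placeholder
    model.train()
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val = loss_fn(model(x_val), y_val).item()
    if val < best_val:
        best_val, stall = val, 0
    else:
        stall += 1
        if stall >= patience:  # early stopping of 10 epochs on the validation loss
            break
```

Patience-based early stopping is one common reading of "an early-stopping of 10 epochs"; the paper does not specify whether the counter resets on any improvement or only on a minimum delta.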