Equivariant Graph Neural Operator for Modeling 3D Dynamics

Authors: Minkai Xu, Jiaqi Han, Aaron Lou, Jean Kossaifi, Arvind Ramanathan, Kamyar Azizzadenesheli, Jure Leskovec, Stefano Ermon, Anima Anandkumar

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments in multiple domains, including particle simulations, human motion capture, and molecular dynamics, demonstrate the significantly superior performance of EGNO against existing methods, thanks to the equivariant temporal modeling.
Researcher Affiliation | Collaboration | Stanford University, NVIDIA, Argonne National Laboratory, California Institute of Technology.
Pseudocode | No | The paper describes the architecture and processes in text and diagrams (Figure 1, Figure 4) but does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/MinkaiXu/egno.
Open Datasets | Yes | We adopt the 3D N-body simulation dataset (Satorras et al., 2021)... We further benchmark our model on the CMU Motion Capture dataset (CMU, 2003)... We adopt the MD17 dataset (Chmiela et al., 2017)... We use the preprocessed version (Han et al., 2022b) of the AdK equilibrium trajectory dataset (Seyler & Beckstein, 2017) integrated in the MDAnalysis toolkit (Richard J. Gowers et al., 2016). (A loading sketch for the AdK trajectory follows the table.)
Dataset Splits | Yes | N-body simulation: 3000/2000/2000 trajectories for training/validation/testing. CMU Motion Capture: Subject #35 has 200/600/600 trajectories for training/validation/testing, while Subject #9 has 200/240/240. MD17: each trajectory is randomly partitioned into 500/2000/2000 subsets for training/validation/testing. Protein: the entire trajectory is divided into 2481 training, 827 validation, and 878 testing sub-trajectories. (A sketch of one plausible random partition follows the table.)
Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments.
Software Dependencies | No | The paper mentions a 'PyTorch implementation' and the 'Adam optimizer (Kingma & Ba, 2014)' but does not provide version numbers for these software components.
Experiment Setup | Yes | We provide detailed hyperparameters of our EGNO in Table 8. Specifically, batch is the batch size, lr the learning rate, wd the weight decay, layer the number of layers, hidden the hidden dimension, timestep the number of time steps, time_emb the dimension of the time embedding, and num_mode the number of modes (frequencies). We adopt the Adam optimizer (Kingma & Ba, 2014), and all models are trained to convergence with early stopping (patience of 50 epochs) on the validation loss. (A training-loop sketch follows the table.)
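
The protein row above cites the MDAnalysis toolkit and the AdK equilibrium trajectory. As a non-authoritative illustration of that loading step (not the authors' pipeline), here is a minimal sketch assuming the MDAnalysisData companion package and its documented fetch_adk_equilibrium() helper; the backbone-atom selection is illustrative, not a confirmed detail of the Han et al. (2022b) preprocessing:

```python
# Minimal sketch of loading the AdK equilibrium trajectory via MDAnalysis.
# Assumption: the MDAnalysisData companion package provides the dataset
# through its documented fetch_adk_equilibrium() helper.
import numpy as np
import MDAnalysis as mda
from MDAnalysisData import datasets

adk = datasets.fetch_adk_equilibrium()           # downloads topology + trajectory
u = mda.Universe(adk.topology, adk.trajectory)   # build the MDAnalysis Universe

backbone = u.select_atoms("backbone")            # illustrative atom selection
frames = np.stack([backbone.positions.copy() for ts in u.trajectory])
print(frames.shape)                              # (num_frames, num_atoms, 3)
```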
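The split sizes quoted in the Dataset Splits row are counts only; the windowing into sub-trajectories and the shuffling procedure are not specified in the quoted text. A minimal sketch of one plausible random partition, where split_indices is a hypothetical helper and the seed is an assumption:

```python
# Hypothetical helper reproducing the quoted split sizes; the actual
# partitioning procedure and random seed are not given in the paper text above.
import numpy as np

def split_indices(n_items, n_train, n_val, n_test, seed=0):
    """Randomly partition item indices into train/validation/test subsets."""
    assert n_train + n_val + n_test <= n_items
    perm = np.random.default_rng(seed).permutation(n_items)
    return (perm[:n_train],
            perm[n_train:n_train + n_val],
            perm[n_train + n_val:n_train + n_val + n_test])

# MD17: 500/2000/2000 per-trajectory split quoted above.
train_idx, val_idx, test_idx = split_indices(4500, 500, 2000, 2000)
```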
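The Experiment Setup row reports Adam with early stopping (patience of 50 epochs) on validation loss. A minimal sketch of that recipe, where model.loss(), the data loaders, and the lr/wd defaults are placeholders rather than values taken from Table 8:

```python
# Sketch of the reported optimization setup: Adam, trained to convergence with
# early stopping (patience 50 epochs) on validation loss. Hyperparameter
# defaults here are placeholders, not the paper's Table 8 values.
import torch

def train(model, train_loader, val_loader, lr=1e-3, wd=1e-5, patience=50):
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
    best_val, best_state, stale = float("inf"), None, 0
    while stale < patience:
        model.train()
        for batch in train_loader:
            opt.zero_grad()
            model.loss(batch).backward()   # hypothetical loss helper
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(model.loss(b).item() for b in val_loader) / len(val_loader)
        if val < best_val:                 # improvement: reset patience counter
            best_val, best_state, stale = val, model.state_dict(), 0
        else:
            stale += 1
    if best_state is not None:
        model.load_state_dict(best_state)  # restore best validation checkpoint
    return model
```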