SE(3) Equivariant Graph Neural Networks with Complete Local Frames

Authors: Weitao Du, He Zhang, Yuanqi Du, Qi Meng, Wei Chen, Nanning Zheng, Bin Shao, Tie-Yan Liu

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on two tasks: Newtonian mechanics modeling and equilibrium molecule conformation generation. Extensive experimental results demonstrate that our model achieves the best or competitive performance on two types of datasets.
Researcher Affiliation | Collaboration | ¹Chinese Academy of Sciences, China; ²Xi'an Jiaotong University, China; ³George Mason University, USA; ⁴Microsoft Research, USA.
Pseudocode | Yes | Algorithm 1: ClofNet (see the frame-construction sketch after the table).
Open Source Code | No | The paper cites external open-source projects used in its experiments (e.g., EGNN, NRI) but does not provide a link to, or a statement about releasing, the source code for its own proposed method (ClofNet).
Open Datasets | Yes | We evaluate the proposed model on the GEOM-QM9 and GEOM-Drugs datasets (Axelrod & Gomez-Bombarelli, 2020) as well as the ISO17 dataset (Schütt et al., 2017).
Dataset Splits | Yes | Following EGNN, for each system, we sample 3,000 trajectories for training, 2,000 for validation, and 2,000 for testing (see the split sketch after the table).
Hardware Specification | Yes | The forward time is measured by averaging over multiple batches on an Nvidia Tesla V100 GPU.
Software Dependencies | No | The paper mentions software such as PyTorch (Paszke et al., 2019), the Adam optimizer (Kingma & Ba, 2014), and the Dopri5 solver (Dormand & Prince, 1980), but only as citations, without the specific version numbers needed to reproduce the software environment.
Experiment Setup | Yes | All baselines consist of 4 layers with hidden dimension 64 and are trained with the AdamW optimizer (Loshchilov & Hutter, 2017) via a Mean Squared Error (MSE) loss. The learning rate and training epochs are tuned independently for each model... ClofNet is equipped with 4 Graph Transformer blocks and the hidden dimensions are set to 288. All models are trained with the Adam optimizer via the score-matching loss function (see Appendix A.7.3, Eq. 49) for 400 epochs. (See the training-loop sketch after the table.)
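
Although Algorithm 1 is not reproduced here, its central step, building a complete rotation-equivariant local frame from pairs of centered coordinates, is compact enough to sketch. The snippet below is a minimal reading of that construction; `build_local_frame` and `scalarize` are our own illustrative names, not the authors' released code.

```python
import torch

def build_local_frame(x_i, x_j, eps=1e-8):
    """Complete local frame for an edge (i, j), following the construction
    described in the ClofNet paper: a normalized radial axis, a normalized
    cross-product axis, and their cross product.

    x_i, x_j: (num_edges, 3) coordinates, assumed centered at the system's
    center of mass so the resulting frame is also translation-invariant.
    """
    a = x_i - x_j
    a = a / (a.norm(dim=-1, keepdim=True) + eps)   # radial axis
    b = torch.cross(x_i, x_j, dim=-1)
    b = b / (b.norm(dim=-1, keepdim=True) + eps)   # normal axis
    c = torch.cross(a, b, dim=-1)                  # completes the frame
    return a, b, c

def scalarize(vec, frame):
    """Project an equivariant 3-vector onto the frame, producing three
    SE(3)-invariant scalars usable as ordinary edge features."""
    return torch.stack([(vec * e).sum(dim=-1) for e in frame], dim=-1)
```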
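
The quoted split protocol (3,000 training, 2,000 validation, 2,000 test trajectories per system) could be drawn as below. This is only a sketch: the total pool size of 7,000 and the seed are our assumptions, since the paper states counts rather than a sampling procedure.

```python
import torch

torch.manual_seed(0)          # assumed seed; the paper does not state one
num_trajectories = 7000       # assumed pool: 3,000 + 2,000 + 2,000
perm = torch.randperm(num_trajectories)
train_idx = perm[:3000]
val_idx = perm[3000:5000]
test_idx = perm[5000:7000]
```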
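
The baseline recipe in the last row (4 layers, hidden dimension 64, AdamW, MSE loss) maps onto a standard PyTorch training loop. The sketch below substitutes a plain MLP and synthetic tensors for the actual graph networks and data, and the learning rate and epoch count are placeholders, since the paper tunes both per model.

```python
import torch
from torch import nn

# Stand-in 4-layer network with hidden dimension 64; the real baselines
# are graph neural networks operating on particle positions/velocities.
model = nn.Sequential(
    nn.Linear(6, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)  # lr tuned per model
criterion = nn.MSELoss()

# Synthetic stand-in data: initial state features -> future positions.
inputs = torch.randn(3000, 6)
targets = torch.randn(3000, 3)

for epoch in range(10):  # epoch count tuned independently in the paper
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
```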