Equivariance via Minimal Frame Averaging for More Symmetries and Efficiency

Authors: Yuchao Lin, Jacob Helwig, Shurui Gui, Shuiwang Ji

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Results demonstrate the efficiency and effectiveness of encoding symmetries via MFA across a diverse range of tasks, including n-body simulation, top tagging in collider physics, and relaxed energy prediction. Our code is available at https://github.com/divelab/MFA. (Abstract) ... Empirically, we demonstrate the advantages of our method on a variety of tasks spanning diverse groups, including n-body simulation, isomorphic graph separation, classification of hadronically decaying top quarks (top tagging), relaxed energy prediction on OC20, and prediction of 5-dimensional convex hull volumes. (Introduction) ... 7. Experiments (Section Title)
Researcher Affiliation | Academia | Yuchao Lin, Jacob Helwig, Shurui Gui, Shuiwang Ji (Department of Computer Science and Engineering, Texas A&M University, Texas, USA). Correspondence to: Shuiwang Ji <sji@tamu.edu>.
Pseudocode | Yes | Algorithm 1: Generalized Gram-Schmidt Orthogonalization (Appendix E.1)
Open Source Code | Yes | Our code is available at https://github.com/divelab/MFA.
Open Datasets | Yes | The task for the top tagging data (Kasieczka et al., 2019b)... (Section 7.3) ... We consider the task of predicting the relaxed energy of an adsorbate interacting with catalyst conditioned on the initial atomic structure from the Open Catalyst (OC20) dataset (Chanussot et al., 2021). (Section 7.4) ... The Sn-invariant Weisfeiler-Lehman (WL) datasets considered by Puny et al. (2021) task models with separating and classifying graphs. (Section 7.5) ... The convex hull dataset from Ruhe et al. (2023) tasks models with computing the volume of the convex hull generated by sets of 5-dimensional points. (Section 7.6) ... In the E(3)-equivariant n-body problem from Kipf et al. (2018); Satorras et al. (2021)... (Section 7.2)
Dataset Splits | Yes | The convex hull dataset, comprising 16,384 samples for each of the training, validation, and test sets, is generated following Ruhe et al. (2023).
Hardware Specification | Yes | Training and evaluation are conducted on a single NVIDIA GeForce RTX 2080 Ti GPU. (Appendix I.2) ... The model is trained and evaluated on an NVIDIA A100 GPU. (Appendix I.3) ... Training and evaluation are conducted on 1 NVIDIA A100 GPU. (Appendix I.4)
Software Dependencies | No | No specific version numbers for software dependencies (e.g., Python, PyTorch, CUDA) were found.
Experiment Setup | Yes | Training parameters include a batch size of 100, a learning rate of 1e-3, and a weight decay of 5e-6 over 10,000 epochs. (Appendix I.2) ... The Adam optimizer is employed with a batch size of 256 and an initial learning rate of 2e-3, adjusted by a cosine annealing scheduler over 12 epochs. (Appendix I.3) ... Both MINKGNN and MFA-MINKGNN are optimized using the Adam optimizer, with a batch size of 64, a learning rate of 5e-4, a weight decay of 1e-2, and a cosine annealing scheduler over 100 epochs. (Appendix I.4)
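The Pseudocode row cites Algorithm 1, a generalized Gram-Schmidt orthogonalization, in Appendix E.1 of the paper. For context, the classical textbook procedure that the paper generalizes can be sketched in pure Python; this is an illustration only, not the paper's generalized variant:

```python
import math

def dot(u, v):
    """Standard Euclidean inner product of two vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt: return an orthonormal basis spanning `vectors`.

    Inputs that are (numerically) linearly dependent on earlier vectors
    leave a near-zero residual and are skipped.
    """
    basis = []
    for v in vectors:
        # Subtract the projection of v onto each basis vector found so far.
        w = list(v)
        for q in basis:
            c = dot(w, q)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        if norm > tol:
            basis.append([wi / norm for wi in w])
    return basis

# Usage: two independent vectors in R^3 plus one dependent vector;
# the dependent vector is dropped, leaving a 2-vector orthonormal basis.
basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 0.0], [2.0, 1.0, 0.0]])
```

The paper's generalized variant (Appendix E.1) extends this idea beyond the standard inner product; see the repository at https://github.com/divelab/MFA for the actual implementation.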
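The Experiment Setup row quotes a cosine annealing scheduler with an initial learning rate of 2e-3 over 12 epochs (Appendix I.3). Standard cosine annealing computes the epoch-t learning rate as lr_min + (lr_max - lr_min)(1 + cos(pi t / T)) / 2. The sketch below uses those quoted settings and assumes lr_min = 0, which the report does not specify:

```python
import math

def cosine_annealing_lr(epoch, total_epochs, lr_max, lr_min=0.0):
    """Standard cosine annealing: decay lr_max toward lr_min over total_epochs."""
    return lr_min + 0.5 * (lr_max - lr_min) * (
        1.0 + math.cos(math.pi * epoch / total_epochs)
    )

# Schedule matching the Appendix I.3 settings quoted above:
# initial learning rate 2e-3 annealed over 12 epochs (lr_min = 0 assumed).
schedule = [cosine_annealing_lr(e, 12, 2e-3) for e in range(13)]
```

The schedule starts at 2e-3, passes through 1e-3 at the halfway point, and decays smoothly to the assumed minimum at epoch 12.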