Fast, Expressive $\mathrm{SE}(n)$ Equivariant Networks through Weight-Sharing in Position-Orientation Space

Authors: Erik J. Bekkers, Sharvaree Vadgama, Rob Hesselink, Putri A. van der Linden, David W. Romero

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We support this claim with state-of-the-art results in accuracy and speed on five different benchmarks in 2D and 3D, including interatomic potential energy prediction, trajectory forecasting in N-body systems, and generating molecules via equivariant diffusion models." and "5 EXPERIMENTS: In this section, we evaluate our approach. Comprehensive implementation details, including architecture specifications and optimization techniques, can be found in Appx. C and D."
Researcher Affiliation | Collaboration | University of Amsterdam, Vrije Universiteit Amsterdam, and NVIDIA Research
Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code is available at https://github.com/ebekkers/ponita.
Open Datasets | Yes | "rMD17 (Christensen & Von Lilienfeld, 2020) is a dataset comprising molecular dynamics trajectories of ten small molecules." and "We train on the QM9 dataset (Ramakrishnan et al., 2014), a standard dataset containing molecular properties, one-hot representations of atom types (H, C, N, O, F), and 3D coordinates for 130K molecules with up to 9 heavy atoms." and "Finally, we benchmark PΘNITA on the charged N-body particle system experiment proposed in Kipf et al. (2018)."
Dataset Splits | No | The paper details training settings and hyperparameters for each benchmark (e.g., 'trained for 5000 epochs, with a batch size of 5', 'learning rate of 5e-4'), but it does not specify explicit dataset splits (e.g., 80/10/10 percentages or sample counts for training, validation, and test sets).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It only mentions training times (e.g., 'training time per epoch of 20.7').
Software Dependencies | No | The paper mentions software libraries such as 'PyTorch (Paszke et al., 2019)', 'PyTorch-Geometric (Fey & Lenssen, 2019)', and 'WandB (Biewald, 2020)', but it does not specify their exact version numbers.
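Since the paper names these libraries but pins no versions, a reproduction would need to record them itself. A minimal Python sketch of how one might log the exact versions of the stated dependencies (assuming the standard torch, torch_geometric, and wandb packages):

```python
# Minimal sketch: record exact dependency versions for a reproducibility
# report, since the paper names the libraries but not their versions.
import torch
import torch_geometric
import wandb

versions = {
    "torch": torch.__version__,
    "torch_geometric": torch_geometric.__version__,
    "wandb": wandb.__version__,
}

for name, version in versions.items():
    print(f"{name}=={version}")
```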
Experiment Setup | Yes | "Training settings. The rMD17 results were obtained with PΘNITA and PNITA with L = 5 layers and C = 128 hidden features. The polynomial degree was set to 3. The models were trained for 5000 epochs with a batch size of 5. We used the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 5e-4 and a Cosine Annealing learning rate schedule with a warmup period of 50 epochs."
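The reported optimizer and schedule translate directly into a standard PyTorch setup. Below is a minimal sketch using PyTorch's built-in schedulers and a placeholder model (the actual PΘNITA architecture lives in the linked repository); the linear form of the warmup is an assumption, since the paper states only its 50-epoch length.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

# Placeholder model; the real PΘNITA network (L = 5 layers, C = 128 hidden
# features, polynomial degree 3) is at https://github.com/ebekkers/ponita.
model = torch.nn.Linear(128, 128)

EPOCHS = 5000
WARMUP_EPOCHS = 50  # warmup period reported in the paper

# Adam with the reported learning rate of 5e-4.
optimizer = Adam(model.parameters(), lr=5e-4)

# 50-epoch warmup followed by cosine annealing over the remaining epochs.
# Linear warmup is an assumption; the paper does not specify its shape.
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=1e-2, total_iters=WARMUP_EPOCHS),
        CosineAnnealingLR(optimizer, T_max=EPOCHS - WARMUP_EPOCHS),
    ],
    milestones=[WARMUP_EPOCHS],
)

for epoch in range(EPOCHS):
    # ... iterate over rMD17 training batches (batch size 5) here ...
    optimizer.step()  # would follow loss.backward() in a real loop
    scheduler.step()  # advance the warmup/cosine schedule once per epoch
```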