Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance

Authors: Jinwoo Kim, Dat Nguyen, Ayhan Suleymanzade, Hyeokjun An, Seunghoon Hong

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical tests show competitive results against tailored equivariant architectures, suggesting the potential for learning equivariant functions for diverse groups using a non-equivariant universal base architecture. We further show evidence of enhanced learning in symmetric modalities, like graphs, when pretrained from non-symmetric modalities, like vision.
Researcher Affiliation | Academia | Jinwoo Kim, Tien Dat Nguyen, Ayhan Suleymanzade, Hyeokjun An, Seunghoon Hong (KAIST)
Pseudocode | No | The paper describes its methods in text and mathematical equations, but it does not include any clearly labeled pseudocode blocks or algorithms. (A minimal sketch of the core symmetrization step is given after this table.)
Open Source Code | Yes | Code is available at https://github.com/jw9730/lps.
Open Datasets | Yes | For empirical demonstration, we adopt the experimental setup of [41] and use the n-body dataset [84, 31], where the task is predicting the positions of n = 5 charged particles after a certain time, given their initial positions and velocities in R^3 (S_n × E(3) equivariant).
Dataset Splits | No | The paper names its datasets (e.g., 'GRAPH8c', 'EXP', 'n-body', 'PATTERN', 'Peptides-func', 'Peptides-struct', 'PCQM-Contact') and their general characteristics (Tables 5 and 6), and it uses validation for early stopping ('early stopping based on validation loss'), but it does not explicitly provide the train/validation/test split percentages or sample counts for all datasets.
Hardware Specification | Yes | which takes around 30 minutes on a single RTX 3090 GPU with 24GB using PyTorch [72].
Software Dependencies | Yes | which takes around 30 minutes on a single RTX 3090 GPU with 24GB using PyTorch [72].
Experiment Setup | Yes | We train our models with binary cross-entropy loss using the Adam optimizer [46] with batch size 100 and learning rate 1e-3 for 2,000 epochs, which takes around 30 minutes on a single RTX 3090 GPU with 24GB using PyTorch [72]. (A hedged sketch of this training configuration also follows the table.)
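
The core idea assessed above, equivariance via probabilistic symmetrization, can be illustrated with a short sketch. This is a simplification, not the paper's implementation: it samples permutations uniformly instead of from the learned, input-conditioned distribution p_theta(g|x) described in the paper, it restricts attention to the permutation group S_n, and the names BaseMLP and symmetrize are hypothetical.

```python
import torch
import torch.nn as nn

class BaseMLP(nn.Module):
    """Non-equivariant base: flattens the node axis, so it has no built-in permutation symmetry."""
    def __init__(self, n, d):
        super().__init__()
        self.n, self.d = n, d
        self.net = nn.Sequential(nn.Linear(n * d, 128), nn.ReLU(), nn.Linear(128, n * d))

    def forward(self, x):  # x: (batch, n, d)
        b = x.shape[0]
        return self.net(x.reshape(b, -1)).reshape(b, self.n, self.d)

def symmetrize(base, x, num_samples=4):
    """Monte Carlo estimate of f(x) = E_g[g^{-1} . base(g . x)] over permutations g.
    The exact average is S_n-equivariant for any base function."""
    _, n, _ = x.shape
    outs = []
    for _ in range(num_samples):
        perm = torch.randperm(n)                # g sampled uniformly (the paper learns this distribution)
        inv = torch.argsort(perm)               # index form of g^{-1}
        outs.append(base(x[:, perm])[:, inv])   # g^{-1} . base(g . x)
    return torch.stack(outs).mean(dim=0)

# Usage: an (approximately) S_n-equivariant output for 5 particles with 3-d features.
f_x = symmetrize(BaseMLP(n=5, d=3), torch.randn(8, 5, 3))
```

Averaging g^{-1} . base(g . x) over group samples yields a permutation-equivariant function in expectation regardless of the base architecture; the paper's contribution is learning the sampling distribution rather than fixing it to uniform.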
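Similarly, the reported experiment setup (Adam, learning rate 1e-3, batch size 100, binary cross-entropy, 2,000 epochs, early stopping on validation loss) can be summarized as a training-loop sketch. Here model, train_set, and val_set are placeholders, BCEWithLogitsLoss stands in for the stated binary cross-entropy loss, and the patience value is illustrative since the report does not quote one.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_set, val_set, epochs=2000, patience=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # Adam, lr 1e-3 as reported
    loss_fn = torch.nn.BCEWithLogitsLoss()                # binary cross-entropy on logits
    train_loader = DataLoader(train_set, batch_size=100, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=100)
    best, wait = float("inf"), 0
    for epoch in range(epochs):                           # 2,000 epochs as reported
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader)
        if val < best:
            best, wait = val, 0
        else:
            wait += 1
            if wait >= patience:                          # early stopping on validation loss
                break
```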