Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?
Authors: Jiacheng Cen, Anyi Li, Ning Lin, Yuxiang Ren, Zihe Wang, Wenbing Huang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments demonstrate that HEGNN not only aligns with our theoretical analyses on a toy dataset consisting of symmetric structures, but also shows substantial improvements on other complicated datasets without obvious symmetry, including N-body and MD17. |
| Researcher Affiliation | Collaboration | 1 Gaoling School of Artificial Intelligence, Renmin University of China 2 Beijing Key Laboratory of Big Data Management and Analysis Methods 3 2012 Laboratories, Huawei Technologies, Shanghai |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/GLAD-RUC/HEGNN. |
| Open Datasets | Yes | N-body system [43] is a dataset generated from simulations. ... MD17 [44] dataset contains trajectory data for eight molecules generated through molecular dynamics simulations. |
| Dataset Splits | Yes | We use 5000 samples for training, 2000 for validation, and 2000 for testing (N-body). ... splitting the dataset into 500/2000/2000 frame pairs for training, validation and testing, respectively (MD17). See the split sketch below the table. |
| Hardware Specification | Yes | All experiments are run on a single NVIDIA A100-80G GPU. |
| Software Dependencies | No | The paper mentions using libraries such as e3nn [39] and SciPy [83] but does not provide version numbers for them or for other software dependencies. |
| Experiment Setup | No | The paper states the dataset splits (e.g., "5000 samples for training, 2000 for validation, and 2000 for testing") and the loss function ("Mean Squared Error (MSE)"). It also mentions "Following the settings in [35]" for the toy datasets. However, it does not explicitly detail concrete hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings in the main text. |
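
The Dataset Splits row above reports 5000/2000/2000 samples for N-body and 500/2000/2000 frame pairs for MD17, but the quoted text does not say how the indices are drawn. Below is a minimal sketch, assuming a seeded random permutation; the `split_indices` helper and the total dataset sizes (taken here as the sum of the three split sizes) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def split_indices(n_total: int, n_train: int, n_val: int, n_test: int, seed: int = 0):
    """Illustrative index split matching the sizes quoted in the table.

    The paper does not specify whether splits are random or sequential;
    a seeded random permutation is assumed here for reproducibility.
    """
    assert n_train + n_val + n_test <= n_total
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_total)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# Sizes quoted in the table; totals are assumed to be the sum of the splits.
nbody_train, nbody_val, nbody_test = split_indices(9000, 5000, 2000, 2000)
md17_train, md17_val, md17_test = split_indices(4500, 500, 2000, 2000)
```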