Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel
Authors: Xuan Li, Zhanke Zhou, Jiangchao Yao, Yu Rong, Lu Zhang, Bo Han
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on four long-range graph benchmarks, covering graph-level and link-level tasks on molecular graphs. We achieve up to a 27.32% and 38.27% improvement in the 2D and 3D scenarios, respectively. |
| Researcher Affiliation | Collaboration | Xuan Li¹, Zhanke Zhou¹, Jiangchao Yao²,³, Yu Rong⁴, Lu Zhang¹, Bo Han¹ (¹Hong Kong Baptist University; ²CMIC, Shanghai Jiao Tong University; ³Shanghai AI Laboratory; ⁴Tencent AI Lab) |
| Pseudocode | Yes | Algorithm 1: "Message propagation with neural atoms" (a hedged sketch of this step appears after the table). |
| Open Source Code | Yes | Code and datasets are publicly available at https://github.com/tmlr-group/NeuralAtom. |
| Open Datasets | Yes | Code and datasets are publicly available at https://github.com/tmlr-group/NeuralAtom. We employ the molecular datasets (Peptides-Func, Peptides-Struct, PCQM-Contact) that exhibit LRI from the Long Range Graph Benchmark (LRGB) (Vijay et al., 2022b). |
| Dataset Splits | No | The paper mentions 'validation performance evaluation' and shows training and validation loss curves in Appendix G; however, it does not provide explicit percentages or sample counts for the training, validation, or test splits. Appendix A.2 mentions an 'inductive setting', which implies a split, but no quantitative details are given. |
| Hardware Specification | Yes | All the experiments are run on an NVIDIA RTX 3090 GPU with AMD Ryzen 3960X CPU. |
| Software Dependencies | No | The paper mentions using the 'GraphGPS framework' and 'PyG framework' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | Appendix A.1 ('Hyperparameters') states: 'We summarize the common hyperparameters that are shared across different models on the LRGB datasets, along with the model-specific hyperparameters, shown as Tab. 5 to Tab. 8.' These tables give specific values for dropout, PE dim, batch size, learning rate, # epochs, # GNN layers, hidden dim, # heads, proportion, and # neural atoms (a placeholder config sketch follows this table). |
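Algorithm 1 itself is not reproduced in this report, so the following is a minimal PyTorch sketch of how message propagation with neural atoms is commonly understood from the paper's description: atoms are pooled into K neural atoms by learned-query attention, the neural atoms exchange information globally, and the result is broadcast back to the atoms. The class name `NeuralAtomLayer`, the single-graph (non-batched) setting, and the residual combination are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralAtomLayer(nn.Module):
    """Hedged sketch of Algorithm 1 (message propagation with neural atoms).

    Assumptions, not the official code: learned-query cross-attention pools
    atoms into K neural atoms; self-attention exchanges information among
    the neural atoms; the transposed allocation weights scatter messages back.
    """

    def __init__(self, hidden_dim: int, num_neural_atoms: int, num_heads: int = 4):
        super().__init__()
        # One learnable query per neural atom.
        self.queries = nn.Parameter(torch.randn(num_neural_atoms, hidden_dim))
        self.self_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.out_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_atoms, hidden_dim) node embeddings from the preceding GNN layer.
        d = h.size(-1)
        # Allocation weights: how strongly each atom contributes to each neural atom.
        alloc = F.softmax(self.queries @ h.t() / d ** 0.5, dim=-1)   # (K, N)
        z = alloc @ h                                                # (K, d) neural atoms
        # Global information exchange among the K neural atoms.
        z, _ = self.self_attn(z.unsqueeze(0), z.unsqueeze(0), z.unsqueeze(0))
        z = z.squeeze(0)
        # Broadcast neural-atom messages back to atoms; the residual keeps
        # the local message-passing signal intact.
        return h + self.out_proj(alloc.t() @ z)                      # (N, d)
```

Interleaving such a layer with ordinary message-passing layers gives any two atoms a two-hop path through the K neural atoms, which matches the paper's framing of neural atoms as an efficient communication channel for long-range interactions.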
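Similarly, the hyperparameter surface listed under Experiment Setup can be made concrete as a config dictionary. Every value below is a placeholder chosen for illustration; the actual tuned settings are reported in Tab. 5 to Tab. 8 of the paper's Appendix A.1.

```python
# Placeholder values only; consult Tab. 5 to Tab. 8 of the paper for the
# settings actually used in the experiments.
config = {
    "dropout": 0.1,           # placeholder
    "pe_dim": 16,             # positional-encoding dimension (placeholder)
    "batch_size": 32,         # placeholder
    "learning_rate": 1e-3,    # placeholder
    "num_epochs": 200,        # placeholder
    "num_gnn_layers": 4,      # placeholder
    "hidden_dim": 128,        # placeholder; must be divisible by num_heads
    "num_heads": 4,           # attention heads (placeholder)
    "proportion": 0.25,       # fraction of nodes kept as neural atoms (assumption)
    "num_neural_atoms": 32,   # placeholder
}
```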