ComENet: Towards Complete and Efficient Message Passing for 3D Molecular Graphs

Authors: Limei Wang, Yi Liu, Yuchao Lin, Haoran Liu, Shuiwang Ji

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate the capability and efficiency of ComENet, especially on real-world datasets that are large in both numbers and sizes of graphs. Our code is publicly available as part of the DIG library (https://github.com/divelab/DIG). For example, Table 2 and Table 3 compare performance and efficiency on datasets such as OC20 and Molecule3D, which are empirical evaluations.
Researcher Affiliation | Academia | Limei Wang, Texas A&M University, College Station, TX 77843, limei@tamu.edu; Yi Liu, Florida State University, Tallahassee, FL 32306, liuy@cs.fsu.edu; Yuchao Lin, Texas A&M University, College Station, TX 77843, kruskallin@tamu.edu; Haoran Liu, Texas A&M University, College Station, TX 77843, liuhr99@tamu.edu; Shuiwang Ji, Texas A&M University, College Station, TX 77843, sji@tamu.edu
Pseudocode | Yes | All the computing procedures for ComENet are described in detail in Algorithm 1 of Appendix A.1.
Open Source Code | Yes | Our code is publicly available as part of the DIG library (https://github.com/divelab/DIG).
Open Datasets | Yes | We examine the power and efficiency of ComENet on two large-scale datasets, Open Catalyst 2020 (OC20) [12] and Molecule3D [56], and the most commonly used dataset, QM9 [40].
Dataset Splits | Yes | The statistics of the three datasets are provided in Table 1, including split ratios: OC20 70:15:15, Molecule3D 6:2:2, QM9 84:8:8.
Hardware Specification | Yes | For example, SphereNet needs 5 hours per epoch while ComENet only requires 20 minutes using the same computing infrastructure (NVIDIA RTX A6000, 48 GB). All the models are trained using the same computing infrastructure (NVIDIA GeForce RTX 2080 Ti, 11 GB).
Software Dependencies | No | The paper mentions using the PyTorch Geometric library and the Adam optimizer but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | No | The experimental setup and search space for all models are provided in Appendix A.6; the main text explicitly defers these details to the appendix.
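The split ratios reported for the datasets (e.g., 70:15:15 for OC20, 84:8:8 for QM9) can be turned into concrete train/validation/test sizes. The sketch below is illustrative only, not the paper's code; the helper name and the example sample count are assumptions.

```python
def split_sizes(n_samples, ratio):
    """Convert a split ratio like (70, 15, 15) into integer sizes summing to n_samples.

    Illustrative helper (not from the ComENet paper). Any rounding
    remainder is assigned to the first (training) split.
    """
    total = sum(ratio)
    sizes = [n_samples * r // total for r in ratio]
    sizes[0] += n_samples - sum(sizes)  # absorb rounding remainder into train
    return sizes

# Example: the QM9 ratio 84:8:8 applied to a hypothetical 130,831 molecules
print(split_sizes(130831, (84, 8, 8)))
```

This kind of deterministic size computation makes the reported ratios reproducible regardless of dataset size, since the three splits always sum exactly to the total.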