Molformer: Motif-Based Transformer on 3D Heterogeneous Molecular Graphs

Authors: Fang Wu, Dragomir Radev, Stan Z. Li

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We validate Molformer across a broad range of domains, including quantum chemistry, physiology, and biophysics. Extensive experiments show that Molformer outperforms or achieves performance comparable to several state-of-the-art baselines. |
| Researcher Affiliation | Academia | Fang Wu (1, 3), Dragomir Radev (2), Stan Z. Li (1, *). 1: School of Engineering, Westlake University; 2: Department of Computer Science, Yale University; 3: Institute of AI Industry Research, Tsinghua University. |
| Pseudocode | Yes | Algorithm 1: Attentive Farthest Point Sampling. |
| Open Source Code | Yes | The code is available at https://github.com/smiles724/Molformer. |
| Open Datasets | Yes | For QM9, we use the exact train/validation/test split of Townshend et al. (2020). For PDBbind, 90% of the data is used for training and the rest is divided equally between validation and test, following Chen et al. (2019). For the other datasets, we adopt scaffold splitting with an 8:1:1 train/validation/test ratio, as in Rong et al. (2020). |
| Dataset Splits | Yes | For QM9, we use the exact train/validation/test split of Townshend et al. (2020). For PDBbind, 90% of the data is used for training and the rest is divided equally between validation and test, following Chen et al. (2019). For the other datasets, we adopt scaffold splitting with an 8:1:1 train/validation/test ratio, as in Rong et al. (2020). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) needed to replicate the experiments. |
| Experiment Setup | No | The paper states, "More implementing details can be found in Appendix," implying that specifics of the experimental setup, such as hyperparameters, are not in the main text. |
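The pseudocode the paper provides is Algorithm 1, Attentive Farthest Point Sampling. As a rough illustration of the underlying greedy procedure, a minimal sketch of vanilla farthest point sampling is given below; the paper's attentive weighting of the selection criterion is not reproduced here, so this should be read only as the base algorithm, not the paper's variant.

```python
import math


def _dist(a, b):
    """Euclidean distance between two coordinate tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def farthest_point_sampling(points, k):
    """Greedy farthest point sampling.

    Starting from the first point (a common convention), repeatedly
    select the point whose distance to the already-selected set is
    largest, until k points are chosen. Returns selected indices.
    """
    if not points or k <= 0:
        return []
    selected = [0]
    # Minimum distance from each point to the selected set so far.
    dists = [_dist(points[0], p) for p in points]
    while len(selected) < min(k, len(points)):
        idx = max(range(len(points)), key=lambda i: dists[i])
        selected.append(idx)
        # Update each point's distance to the nearest selected point.
        for i, p in enumerate(points):
            d = _dist(points[idx], p)
            if d < dists[i]:
                dists[i] = d
    return selected
```

In the paper's attentive variant, the selection criterion additionally incorporates learned attention scores; the greedy outer loop has the same structure.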
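The 8:1:1 scaffold split referenced in the table (Rong et al. 2020) groups molecules by scaffold and assigns whole groups to train/validation/test so that no scaffold spans two sets. A minimal sketch is shown below; `scaffold_fn` stands in for a real Bemis-Murcko scaffold extractor (e.g., RDKit's `MurckoScaffoldSmiles`) and the largest-group-first ordering mirrors common implementations such as DeepChem's ScaffoldSplitter, both assumptions rather than details stated in the paper.

```python
from collections import defaultdict


def scaffold_split(smiles_list, scaffold_fn, frac_train=0.8, frac_valid=0.1):
    """Split molecule indices into train/valid/test by scaffold group.

    scaffold_fn maps a SMILES string to a scaffold key; all molecules
    sharing a key are kept together in a single split.
    """
    groups = defaultdict(list)
    for i, smi in enumerate(smiles_list):
        groups[scaffold_fn(smi)].append(i)
    # Assign the largest scaffold groups first (to train), as in
    # common scaffold-splitter implementations.
    ordered = sorted(groups.values(), key=len, reverse=True)
    n = len(smiles_list)
    n_train, n_valid = frac_train * n, frac_valid * n
    train, valid, test = [], [], []
    for group in ordered:
        if len(train) + len(group) <= n_train:
            train += group
        elif len(valid) + len(group) <= n_valid:
            valid += group
        else:
            test += group
    return train, valid, test
```

With the default fractions this yields the 8:1:1 ratio the paper reports, up to the granularity of whole scaffold groups.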