Geometric Transformer for End-to-End Molecule Properties Prediction

Authors: Yoni Choukroun, Lior Wolf

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results demonstrate the representational power of the end-to-end model... The proposed framework outperforms the state-of-the-art methods... In Table 1 (left) we report the mean absolute error (MAE) on all QM9 targets and compare it to the state-of-the-art models... Our ablation study compares the impact of the different self-attention modules.
Researcher Affiliation | Academia | Yoni Choukroun and Lior Wolf, School of Computer Science, Tel Aviv University; choukroun.yoni@gmail.com, wolf@cs.tau.ac.il
Pseudocode | No | The paper provides architectural diagrams (Figure 1) but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper states that 'The experimental setup including the architecture details and the training procedure is provided in the Appendix', with a footnote linking to the arXiv PDF of the paper itself (https://arxiv.org/pdf/2110.13721.pdf), but does not provide a link to open-source code for the methodology.
Open Datasets | Yes | The popular QM9 dataset [Ramakrishnan et al., 2014] contains 130,831 molecules... We use MD17 [Chmiela et al., 2017] to test model performance in molecular dynamics simulations.
Dataset Splits | Yes | Following previous work, we split the dataset into 110,000, 10,000, and 10,831 molecules for the training, validation, and test sets respectively.
Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU models, CPU types, or cloud computing instances.
Software Dependencies | No | The paper mentions implementation in PyTorch and use of the Adam optimizer, but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | The proposed framework is implemented in PyTorch and the Adam optimizer is used with a learning rate of 0.0001 and exponential decay rates of 0.9 and 0.999... The training ran for 250 epochs (50 epochs for MD17) with a batch size of 128.
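The reported QM9 split (110,000 / 10,000 / 10,831 molecules out of 130,831) can be reproduced with a simple index partition. A minimal sketch in plain Python; the shuffling scheme and seed are illustrative assumptions, not details taken from the paper:

```python
import random

def split_qm9(n_molecules=130831, n_train=110000, n_val=10000, seed=0):
    """Partition molecule indices into train/val/test index lists.

    Sizes follow the paper's reported split (110,000 / 10,000 / 10,831);
    the shuffle and the seed value are assumptions for illustration.
    """
    indices = list(range(n_molecules))
    random.Random(seed).shuffle(indices)
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]
    return train, val, test

train_idx, val_idx, test_idx = split_qm9()
```

In practice the indices would be used to slice a dataset object (e.g. a PyTorch Geometric `QM9` dataset); the sketch only shows the partition arithmetic.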
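The stated optimizer hyperparameters (learning rate 0.0001, exponential decay rates 0.9 and 0.999) are the standard Adam settings. A single Adam update with those values can be sketched in plain Python; the epsilon term and the toy quadratic objective in the usage example are assumptions for illustration:

```python
def adam_step(params, grads, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update using the paper's lr and decay rates.

    params, grads: current parameters and their gradients.
    m, v: running first/second moment estimates; t: 1-based step count.
    Returns updated (params, m, v).
    """
    new_params, new_m, new_v = [], [], []
    for p, g, mi, vi in zip(params, grads, m, v):
        mi = beta1 * mi + (1 - beta1) * g        # first-moment (mean) estimate
        vi = beta2 * vi + (1 - beta2) * g * g    # second-moment estimate
        m_hat = mi / (1 - beta1 ** t)            # bias correction
        v_hat = vi / (1 - beta2 ** t)
        new_params.append(p - lr * m_hat / (v_hat ** 0.5 + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_params, new_m, new_v

# Usage: minimize f(x) = x^2 (gradient 2x) from x = 1.0 for 100 steps.
params, m, v = [1.0], [0.0], [0.0]
for t in range(1, 101):
    grads = [2.0 * p for p in params]
    params, m, v = adam_step(params, grads, m, v, t)
```

With lr = 1e-4 each step moves roughly 1e-4, so after 100 steps the parameter has drifted only slightly toward the minimum; in PyTorch the equivalent setup is `torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))`.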