A Generative Model for Molecular Distance Geometry

Authors: Gregor Simm, José Miguel Hernández-Lobato

ICML 2020

Reproducibility assessment (each entry lists the variable, the assessed result, and the supporting LLM response):
Research Type: Experimental. "In a new benchmark for molecular conformation generation, we show experimentally that our generative model achieves state-of-the-art accuracy."
Researcher Affiliation: Academia. "Department of Engineering, University of Cambridge, Cambridge, UK. Correspondence to: Gregor N. C. Simm <gncs2@cam.ac.uk>."
Pseudocode: No. The paper describes the model in detail using equations (1)-(6) and a graphical illustration (Figure 3), but it does not include a formal pseudocode block or an explicitly labeled algorithm section.
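To make the absent pseudocode concrete, here is a minimal sketch of the two-stage sampling procedure the paper describes in prose: sample pairwise distances conditioned on the molecular graph, then embed them into 3D coordinates. The function names and the classical-MDS embedding below are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def embed_coordinates(d, dim=3):
    """Recover Cartesian coordinates from a full distance matrix via classical
    multidimensional scaling, one standard distance-geometry embedding
    (the paper may use a different embedding procedure)."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # Gram matrix of centered coordinates
    w, v = np.linalg.eigh(b)                 # eigendecomposition (ascending)
    idx = np.argsort(w)[::-1][:dim]          # keep the top `dim` eigenpairs
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

def sample_conformation(graph, decoder, latent_dim):
    """Two-stage sampling: draw z from the prior, decode a distance matrix
    conditioned on the molecular graph, then embed it into 3D.
    `decoder` is a hypothetical stand-in for the trained model."""
    z = np.random.randn(latent_dim)
    d = decoder(graph, z)                    # (n_atoms, n_atoms) distances
    return embed_coordinates(d)
```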
Open Source Code: Yes. "The model is available online" at https://github.com/gncs/graphdg
Open Datasets: Yes. "The CONF17 benchmark is the first benchmark for molecular conformation sampling. It is based on the ISO17 dataset (Schütt et al., 2017a), which consists of conformations of various molecules with the atomic composition C7H10O2 drawn from the QM9 dataset (Ramakrishnan et al., 2014)."
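For readers who want to inspect the underlying data, a short sketch of loading ISO17 conformations with ASE follows; the public ISO17 download ships as ASE SQLite databases, though the exact file path ('iso17/reference.db') is an assumption and may differ between distributions.

```python
# Hedged sketch: inspecting ISO17 conformations with ASE.
from ase.db import connect

db = connect("iso17/reference.db")           # path assumed from the public download
for row in db.select(limit=3):               # first few conformations
    atoms = row.toatoms()                    # ase.Atoms with positions in Angstrom
    print(atoms.get_chemical_formula())      # expected: C7H10O2 for ISO17
    print(atoms.get_positions().shape)       # (n_atoms, 3)
```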
Dataset Splits: Yes. "The optimal values for the hyperparameters for the network dimensions, number of message passes, batch size, and learning rate of the Adam optimizer (Kingma & Ba, 2014) were manually tuned by maximizing the validation performance (ELBO) and are reported in the Appendix."
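The quoted setup (train with Adam, select hyperparameters by validation ELBO) could be sketched as below. The model interface (`model.elbo`), the 90/10 split, and all default values are placeholders, not values from the paper.

```python
import torch
from torch.utils.data import DataLoader, random_split

def train_and_validate(dataset, model, lr=1e-3, batch_size=32, epochs=10):
    """Fit with Adam on a train split, then return the validation ELBO that
    would drive hyperparameter selection. Split ratio is an assumption."""
    n_val = len(dataset) // 10
    train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in DataLoader(train_set, batch_size=batch_size, shuffle=True):
            opt.zero_grad()
            loss = -model.elbo(batch)        # maximize ELBO = minimize its negative
            loss.backward()
            opt.step()
    with torch.no_grad():
        return sum(model.elbo(b).item()
                   for b in DataLoader(val_set, batch_size=batch_size))
```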
Hardware Specification: Yes. "All simulations were carried out on a computer equipped with an i7-3820 CPU and a GeForce GTX 1080 Ti GPU."
Software Dependencies: No. The paper mentions Adam (Kingma & Ba, 2014) as the optimizer and compares against the RDKit (Riniker & Landrum, 2015) and DL4CHEM (Mansimov et al., 2019) methods, but it does not specify version numbers for these or any other software dependencies (e.g., deep learning frameworks) used in the experiments.
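For context, the RDKit baseline cited above (Riniker & Landrum, 2015) is the ETKDG conformer generator, which can be invoked as in the sketch below. The SMILES string is an arbitrary example, not a molecule from the paper, and the conformer count is a placeholder.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Embed multiple conformers with ETKDG (experimental-torsion distance geometry).
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)OC1CCCC1"))   # example ester, not from the paper
cids = AllChem.EmbedMultipleConfs(mol, numConfs=50, params=AllChem.ETKDG())
print(f"generated {len(cids)} conformers")
```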
Experiment Setup: No. The paper states that the optimal hyperparameter values "were manually tuned (...) and are reported in the Appendix", but these specific details are not present in the provided main text.