GNN-Retro: Retrosynthetic Planning with Graph Neural Networks

Authors: Peng Han, Peilin Zhao, Chan Lu, Junzhou Huang, Jiaxiang Wu, Shuo Shang, Bin Yao, Xiangliang Zhang

AAAI 2022, pp. 4014-4021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments on the USPTO dataset show that our framework could outperform the state-of-the-art methods with a large margin under the same settings.
Researcher Affiliation | Collaboration | 1 University of Electronic Science and Technology of China; 2 King Abdullah University of Science and Technology; 3 Aalborg University; 4 Tencent AI Lab; 5 Shanghai Jiao Tong University; 6 University of Notre Dame
Pseudocode | No | The paper describes mathematical formulations and processes but does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | No | The paper does not provide any statement regarding the release of source code for the described methodology or a link to a code repository.
Open Datasets | Yes | The public reaction dataset United States Patent Office (USPTO) is used in our method with the same preprocessing as (Chen et al. 2020).
Dataset Splits | Yes | There are about 1.3 million reactions after the deduplication and filtration, which are randomly separated into training/validation/testing sets with proportion 80%/10%/10% respectively. (An illustrative split sketch follows the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for experiments are provided.
Software Dependencies | No | The paper mentions 'Adam' as an optimizer but does not provide specific version numbers for any programming languages, libraries, or frameworks used (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | For every target molecule, we run the one-step reactions at most 500 times, which is the same as (Chen et al. 2020). The embedding size of the molecule is fixed at 128. We set the weight λ of the partial ordering loss to 1. The slack variable ϵ is set to 7. The threshold τ is selected from the range [0:0.1:1.0], and the weight α is also selected from the range [0:0.1:1.0]. Adam (Kingma and Ba 2015) is utilized as the optimizer to minimize the loss L with a learning rate of 0.001. (A configuration sketch follows the table.)
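
The 80%/10%/10% split reported in the Dataset Splits row can be reproduced with a straightforward random shuffle. The sketch below is a minimal, hypothetical illustration: the function name, the random seed, and the `uspto_reactions` variable are assumptions, not part of the paper or its (unreleased) code.

```python
import random

def split_reactions(reactions, seed=0, ratios=(0.8, 0.1, 0.1)):
    """Randomly split deduplicated, filtered reactions into
    training/validation/testing sets with 80%/10%/10% proportions,
    as reported for the ~1.3 million USPTO reactions."""
    rng = random.Random(seed)
    shuffled = list(reactions)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test

# Hypothetical usage: `uspto_reactions` would hold the USPTO reactions
# after the same preprocessing as Chen et al. (2020).
# train_set, valid_set, test_set = split_reactions(uspto_reactions, seed=42)
```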
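
Likewise, the hyperparameters listed in the Experiment Setup row can be collected into a single configuration. The dictionary below only restates the reported values; the key names and the grid enumeration over τ and α are illustrative assumptions, since the authors do not release code.

```python
from itertools import product

# Reported hyperparameters (key names are illustrative, values are from the paper).
config = {
    "embedding_dim": 128,        # molecule embedding size
    "lambda_partial_order": 1.0, # weight of the partial ordering loss
    "epsilon_slack": 7.0,        # slack variable in the partial ordering loss
    "optimizer": "Adam",         # Kingma and Ba (2015)
    "learning_rate": 1e-3,
    "max_one_step_calls": 500,   # one-step reaction expansions per target molecule
}

# Both the threshold tau and the weight alpha are selected from
# {0.0, 0.1, ..., 1.0}; the search space is their Cartesian product.
grid = [round(0.1 * i, 1) for i in range(11)]
candidate_settings = [
    {**config, "tau": tau, "alpha": alpha}
    for tau, alpha in product(grid, grid)
]

print(len(candidate_settings))  # 121 (tau, alpha) combinations
```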