Self-Improved Retrosynthetic Planning
Authors: Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that our scheme significantly improves the success rate of solving the retrosynthetic problem from 86.84% to 96.32% while maintaining the performance of DNN for predicting valid reactions. |
| Researcher Affiliation | Academia | Korea Advanced Institute of Science and Technology (KAIST); Mohamed bin Zayed University of Artificial Intelligence (MBZUAI). |
| Pseudocode | Yes | We provide an illustration and a detailed description of our framework in Figure 2 and Algorithm 1, respectively. Algorithm 1: Self-Improved Retrosynthetic Planning. (A hedged sketch of this loop appears after the table.) |
| Open Source Code | No | The paper thanks Binghong Chen for providing the dataset and the source implementation of RETRO*, and links to that GitHub repository (https://github.com/binghong-ml/retro_star). This is the code for a baseline used in the paper, not the authors' own implementation of the proposed self-improved framework. |
| Open Datasets | Yes | For the target molecules D_target, we choose synthesizable molecules from I (the initial set of building-block molecules) and reactions in the United States Patent Office (USPTO) database (Lowe, 2012). For the reaction dataset D_reaction, we use reactions extracted from USPTO, following the training/validation/test splits of Chen et al. (2020b). |
| Dataset Splits | Yes | For the reaction dataset D_reaction, we use reactions extracted from USPTO, following the training/validation/test splits of Chen et al. (2020b). (A hedged loading sketch appears after the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like RDChiral, Morgan fingerprint, and Adam optimizer but does not specify their version numbers, which are required for reproducibility. |
| Experiment Setup | Yes | The forward reaction model p_f is trained with a learning rate of 0.001 for 100 epochs. ... the backward reaction model p_b is trained with a learning rate of 0.0001 for 20 epochs. Adam optimizer (Kingma & Ba, 2014) is used with a mini-batch of size 1024 for training all the models. We iterate our overall procedure three times. ... We set both thresholds ε, ε_aug as 0.8. (The quoted settings are collected in a configuration sketch after the table.) |
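Since the authors' code is not released, the following is a minimal sketch of the self-improvement loop named in the Pseudocode row, reconstructed only from the details quoted in this table. The helper callables (`plan_route`, `extract_reactions`, `forward_score`, `retrain`) and the exact filtering rule are assumptions for illustration, not the authors' API.

```python
def self_improve(backward_model, plan_route, extract_reactions,
                 forward_score, retrain, targets, reaction_data,
                 n_iterations=3, epsilon=0.8):
    """Hedged sketch of the self-improvement loop (Algorithm 1).

    Each round plans synthesis routes for the target molecules with the
    current backward model, keeps reactions from successful routes that
    the forward model rates as valid, and retrains on the augmented data.
    """
    for _ in range(n_iterations):  # the paper iterates the procedure three times
        augmented = list(reaction_data)
        for molecule in targets:
            route = plan_route(backward_model, molecule)  # e.g., Retro*-style search
            if route is None:  # planning failed; nothing to add
                continue
            for reaction in extract_reactions(route):
                # Keep a reaction only if the forward model considers it
                # valid; the threshold mirrors the quoted epsilon = 0.8,
                # but the exact filtering rule is an assumption.
                if forward_score(reaction) >= epsilon:
                    augmented.append(reaction)
        backward_model = retrain(backward_model, augmented)
    return backward_model
```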
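The USPTO reactions and the splits of Chen et al. (2020b) referenced in the Open Datasets and Dataset Splits rows are distributed with the Retro* repository linked above. Below is a minimal loading sketch; the file names and pickle format are assumptions for illustration, not verified paths in that repository.

```python
import pickle

# Hypothetical file names; consult the Retro* repository
# (https://github.com/binghong-ml/retro_star) for the actual data artifacts.
splits = {}
for split in ("train", "val", "test"):
    with open(f"uspto_reactions_{split}.pkl", "rb") as f:
        splits[split] = pickle.load(f)

print({name: len(reactions) for name, reactions in splits.items()})
```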
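The Experiment Setup row pins down the optimization hyperparameters exactly. The sketch below collects them in one place, assuming PyTorch; the `nn.Linear` layers are placeholders because the quoted text does not specify the reaction-model architectures, and the dimensions (2048, 1000) are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for the reaction models p_f and p_b;
# the actual architectures are not pinned down by the quoted text.
forward_model = nn.Linear(2048, 1000)   # p_f: trained for 100 epochs
backward_model = nn.Linear(2048, 1000)  # p_b: trained for 20 epochs

# Quoted optimization settings (Adam; Kingma & Ba, 2014).
forward_optimizer = torch.optim.Adam(forward_model.parameters(), lr=1e-3)
backward_optimizer = torch.optim.Adam(backward_model.parameters(), lr=1e-4)

BATCH_SIZE = 1024   # mini-batch size used for all models
EPSILON = 0.8       # threshold epsilon
EPSILON_AUG = 0.8   # threshold epsilon_aug
N_ITERATIONS = 3    # rounds of the overall self-improvement procedure
```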