Active Retrosynthetic Planning Aware of Route Quality
Authors: Luotian Yuan, Yemin Yu, Ying Wei, Yongwei Wang, Zhihua Wang, Fei Wu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our framework to different existing approaches on both the benchmark and an expert dataset and demonstrate that it outperforms the existing state-of-the-art approach by 6.2% in route quality while reducing the query cost by 12.8%. [...] 4 EXPERIMENTS |
| Researcher Affiliation | Academia | Luotian Yuan (1), Yemin Yu (2,4), Ying Wei (3), Yongwei Wang (1,4), Zhihua Wang (4), Fei Wu (1,4). Affiliations: 1 Zhejiang University; 2 City University of Hong Kong; 3 Nanyang Technological University; 4 Shanghai Institute for Advanced Study of Zhejiang University |
| Pseudocode | Yes | Algorithm 1: Training algorithm |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing its code or a link to a code repository for its own implementation. |
| Open Datasets | Yes | We use two test sets to evaluate our methods. The first one is a widely used USPTO-50k benchmark dataset that has 178 hard molecules raised in Chen et al. (2020). [...] Initially, the method in Guo et al. (2020) is employed to pre-train a model utilizing the USPTO-MIT dataset, followed by the fine-tuning of the model in reactions derived from the high-quality, expert-annotated dataset. |
| Dataset Splits | Yes | We partition the expert dataset as 0.8/0.1/0.1 into train/valid/test sets. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not explicitly describe the hardware specifications (e.g., GPU models, CPU types, or cloud instance details) used for its experiments. |
| Software Dependencies | No | The paper mentions algorithms (e.g., TD3) and model components (e.g., an MLP over Morgan fingerprints) but does not provide version numbers for the software dependencies or libraries used. (A featurization sketch follows the table.) |
| Experiment Setup | Yes | As for the hyper-parameters, we set the maximum route depth as 6 and λ in Eq. 1 as 4. [...] Encoder: 4 layers, embedding dimension 2048, FFN embedding dimension 2048, 8 attention heads. Decoder: 4 layers, embedding dimension 2048, FFN embedding dimension 2048, 8 attention heads. Optimizer: Adam, learning rate 1e-4, weight decay 0.0001. Epochs: 12. Clip norm: 0.25. Dropout rate: 0.1. (A configuration sketch follows the table.) |
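The Dataset Splits row above reports a 0.8/0.1/0.1 train/valid/test partition of the expert dataset. A minimal sketch of such a random split is shown below; the `routes` list, the seed, and the helper name are hypothetical, since the paper does not describe its splitting tooling.

```python
import random

def split_dataset(routes, seed=0, frac_train=0.8, frac_valid=0.1):
    """Randomly partition a list of routes into 0.8/0.1/0.1 train/valid/test sets."""
    rng = random.Random(seed)
    indices = list(range(len(routes)))
    rng.shuffle(indices)
    n_train = int(frac_train * len(routes))
    n_valid = int(frac_valid * len(routes))
    train = [routes[i] for i in indices[:n_train]]
    valid = [routes[i] for i in indices[n_train:n_train + n_valid]]
    test = [routes[i] for i in indices[n_train + n_valid:]]
    return train, valid, test
```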
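The Software Dependencies row notes Morgan fingerprints as molecule features without naming a library. RDKit is the usual choice for computing them; the sketch below assumes 2048-bit fingerprints (matching the 2048-dimensional encoder embedding) with radius 2, a conventional setting that the paper does not confirm.

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def morgan_fingerprint(smiles, radius=2, n_bits=2048):
    """Encode a molecule as a fixed-length Morgan fingerprint vector.

    radius=2 is an assumption; the paper does not quote its setting.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,))
    DataStructs.ConvertToNumpyArray(fp, arr)  # copy bits into a NumPy array
    return arr
```

A vector produced this way can serve directly as input to an MLP, which is consistent with the components the row mentions.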
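The Experiment Setup row lists a 4-layer, 8-head encoder-decoder with 2048-dimensional embeddings trained with Adam. The sketch below transcribes those values into PyTorch's `nn.Transformer`; the framework choice is an assumption, as the paper does not state its implementation stack.

```python
import torch
from torch import nn

# Hyper-parameters transcribed from the paper's reported table.
model = nn.Transformer(
    d_model=2048,            # encoder/decoder embedding dimension
    nhead=8,                 # encoder/decoder attention heads
    num_encoder_layers=4,
    num_decoder_layers=4,
    dim_feedforward=2048,    # FFN embedding dimension
    dropout=0.1,
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.0001)

# Per training step (sketch): clip gradients to the reported norm of 0.25
# before optimizer.step(), e.g.
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.25)
```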