Towards understanding retrosynthesis by energy-based models

Authors: Ruoxi Sun, Hanjun Dai, Li Li, Steven Kearnes, Bo Dai

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform rigorous evaluations by running tens of experiments on different model designs. Revealing the performance to the community contributes to the development of retrosynthesis models.
Researcher Affiliation | Industry | ¹Google Cloud AI ²Google Brain ³Google Research {ruoxis, hadai, leeley, kearnes, bodai}@google.com
Pseudocode | Yes | Algorithm 1 EBM framework (see the energy-ranking sketch after this table)
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We evaluate our method on a benchmark dataset named USPTO-50k, which includes 50k reactions falling into ten reaction types from the US patent literature. The datasets are split into train/validation/test with percentage of 80%/10%/10%.
Dataset Splits | Yes | The datasets are split into train/validation/test with percentage of 80%/10%/10%. (An illustrative split sketch follows the table.)
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or cloud computing instances used for experiments.
Software Dependencies | No | The paper mentions "RDKit [29]" but does not specify a version number for this or any other software dependency, which is required for reproducibility. (A version-check snippet follows the table.)
Experiment Setup | No | The paper describes the dataset, splits, evaluation metric, and data augmentation procedures. However, it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other specific system-level training configurations.
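The paper's EBM framework (Algorithm 1) casts retrosynthesis prediction as scoring candidate reactant sets with an energy function and returning the lowest-energy candidates. The sketch below is a minimal Python illustration of that ranking step only, not the authors' implementation; `rank_by_energy`, `energy_fn`, and the toy scoring rule are assumptions made for the sake of a runnable example.

```python
# Minimal sketch of energy-based candidate ranking for retrosynthesis.
# `energy_fn` is a hypothetical stand-in for a trained E(reactants, product);
# the paper's actual parameterizations are not reproduced here.
from typing import Callable, List, Tuple


def rank_by_energy(
    product: str,
    candidates: List[str],
    energy_fn: Callable[[str, str], float],
    k: int = 10,
) -> List[Tuple[str, float]]:
    """Return the k candidate reactant SMILES with the lowest energy."""
    scored = [(reactants, energy_fn(reactants, product)) for reactants in candidates]
    scored.sort(key=lambda pair: pair[1])  # lower energy <=> higher unnormalized probability
    return scored[:k]


if __name__ == "__main__":
    # Toy energy: absolute length mismatch between reactant and product strings.
    toy_energy = lambda reactants, product: float(abs(len(reactants) - len(product)))
    product_smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, for illustration only
    candidate_sets = [
        "CC(=O)OC(C)=O.Oc1ccccc1C(=O)O",  # acetic anhydride + salicylic acid
        "CC(=O)Cl.Oc1ccccc1C(=O)O",       # acetyl chloride + salicylic acid
    ]
    print(rank_by_energy(product_smiles, candidate_sets, toy_energy, k=1))
```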
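For the 80%/10%/10% split noted above, a random partition of the 50k reactions would look like the sketch below. This is illustrative only; USPTO-50k experiments typically reuse the established partition from prior work rather than re-splitting, and the exact split files used by the authors are an assumption this sketch does not cover.

```python
# Illustrative 80/10/10 random split of 50k reaction indices (sketch only).
import random


def split_indices(n: int, seed: int = 0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]


train_idx, val_idx, test_idx = split_indices(50_000)
print(len(train_idx), len(val_idx), len(test_idx))  # 40000 5000 5000
```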
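Since the report flags the missing RDKit version, anyone re-running the pipeline can at least record the version in their own environment; RDKit exposes this as `rdkit.__version__`. The printed value will of course reflect the local installation, not the (unstated) version used in the paper.

```python
# Record the RDKit version actually used, since the paper does not pin one.
import rdkit

print("RDKit version:", rdkit.__version__)
```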