Retrosynthesis Prediction with Conditional Graph Logic Network

Authors: Hanjun Dai, Chengtao Li, Connor Coley, Bo Dai, Le Song

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Dataset: We mainly evaluate our method on a benchmark dataset named USPTO-50k, which contains 50k reactions of 10 different types in the US patent literature. We use exactly the same training/validation/test splits as Coley et al. [8], which contain 80%/10%/10% of the total 50k reactions. Table 1 contains the detailed information about the benchmark. [...] We present the top-k exact match accuracy in Table 3, where k ranges from {1, 3, 5, 10, 20, 50}.
Researcher Affiliation | Collaboration | Hanjun Dai (Google Research, Brain Team, hadai@google.com), Chengtao Li (Galixir Inc., chengtao.li@galixir.com), Connor W. Coley (Massachusetts Institute of Technology, ccoley@mit.edu), Bo Dai (Google Research, Brain Team, bodai@google.com), Le Song (Georgia Institute of Technology / Ant Financial, lsong@cc.gatech.edu)
Pseudocode | Yes | Algorithm 1: Importance Sampling
Open Source Code | Yes | Our code is released at https://github.com/Hanjun-Dai/GLN.
Open Datasets | Yes | We mainly evaluate our method on a benchmark dataset named USPTO-50k, which contains 50k reactions of 10 different types in the US patent literature. We use exactly the same training/validation/test splits as Coley et al. [8], which contain 80%/10%/10% of the total 50k reactions.
Dataset Splits | Yes | We use exactly the same training/validation/test splits as Coley et al. [8], which contain 80%/10%/10% of the total 50k reactions.
Hardware Specification | Yes | It takes about 12 hours to train with a single GTX 1080Ti GPU.
Software Dependencies | No | The paper mentions software like 'RDKit' and 'rdchiral' ('We use rdchiral [31] to extract the retrosynthesis templates...'), but does not specify their version numbers or the versions of other dependencies like PyTorch, TensorFlow, or Python.
Experiment Setup | Yes | We train our model for up to 150k updates with batch size of 64. [...] We tune embedding sizes in {128, 256}, GNN layers {3, 4, 5} and GNN aggregation in {max, mean, sum} using validation set.
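The top-k exact match accuracy quoted above (Table 3 of the paper, k in {1, 3, 5, 10, 20, 50}) has a standard definition: for each test reaction, check whether the ground-truth reactant set appears among the model's top-k ranked candidates, then average over the test set. A minimal sketch, assuming candidates and targets are already canonicalized strings (the paper's exact ranking and canonicalization pipeline is not given in this excerpt; function names are illustrative):

```python
def top_k_exact_match(ranked_candidates, target, ks=(1, 3, 5, 10, 20, 50)):
    """For one test reaction: does the ground-truth reactant string appear
    among the top-k ranked candidates? Returns a hit flag per k."""
    return {k: target in ranked_candidates[:k] for k in ks}

def aggregate_accuracy(per_example_hits, ks=(1, 3, 5, 10, 20, 50)):
    """Average the per-example hit flags into top-k accuracies."""
    n = len(per_example_hits)
    return {k: sum(hits[k] for hits in per_example_hits) / n for k in ks}
```

Because the comparison is an exact string match, both predictions and targets must be canonicalized consistently (e.g. with RDKit's canonical SMILES) before scoring.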
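The Experiment Setup row describes a grid search over embedding size, GNN depth, and GNN aggregation, selected on the validation set. A minimal sketch of enumerating that grid (the paper does not state how the search was orchestrated, so the helper below is a generic illustration):

```python
from itertools import product

# Hyperparameter grid as reported in the paper's experiment setup.
GRID = {
    "embedding_size": [128, 256],
    "gnn_layers": [3, 4, 5],
    "gnn_agg": ["max", "mean", "sum"],
}

def grid_configs(grid):
    """Yield every hyperparameter combination as a dict."""
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(GRID))  # 2 * 3 * 5-way grid -> 18 configurations
```

Each of the 18 configurations would be trained (up to 150k updates, batch size 64, per the quoted setup) and the one with the best validation top-1 accuracy kept.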