Reinforced Molecular Optimization with Neighborhood-Controlled Grammars
Authors: Chencheng Xu, Qiao Liu, Minlie Huang, Tao Jiang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In a series of experiments, we demonstrate that our approach achieves state-of-the-art performance in a diverse range of molecular optimization tasks and exhibits significant superiority in optimizing molecular properties with a limited number of property evaluations. |
| Researcher Affiliation | Academia | 1BNRIST, Tsinghua University, Beijing 100084, China 2Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China 3Department of Automation, Tsinghua University, Beijing 100084, China 4Department of Computer Science and Engineering, UCR, CA 92521, USA |
| Pseudocode | No | The algorithm to parse molecular graphs and infer the production rules is shown in Appendix B. |
| Open Source Code | Yes | Link to code and datasets: https://github.com/Zoesgithub/MNCE-RL |
| Open Datasets | Yes | The ZINC250k molecule dataset [13], the GuacaMol package [3] and 2,337 drug molecules from [31] are used in our experiments. |
| Dataset Splits | No | The paper mentions using datasets for experiments (e.g., ZINC250k, GuacaMol) and discusses training models, but it does not provide specific details on train/validation/test dataset splits, such as percentages, sample counts, or explicit splitting methodologies. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The validity of generated molecules is checked by RDKit [20]. |
| Experiment Setup | Yes | To generate molecules with desired properties, the widely used RL technique, Proximal Policy Optimization [29] (PPO), is adopted to train the model. The objective function of PPO is $L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\right)\right]$, where $\epsilon$ is a hyperparameter... Details of model training and optimizations of hyperparameters are shown in Appendix F. (A minimal sketch of this clipped objective is given after the table.) |
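
To make the quoted objective concrete, the following is a minimal PyTorch sketch of the clipped PPO surrogate from Schulman et al. [29]. The function name and tensor arguments are illustrative assumptions and are not taken from the released MNCE-RL code.

```python
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, epsilon=0.2):
    """Clipped PPO surrogate, returned as a loss to minimize.

    Hypothetical helper for illustration; argument names are assumptions,
    not identifiers from the MNCE-RL repository.
    """
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
    ratio = torch.exp(log_probs_new - log_probs_old)
    # Unclipped and clipped surrogate terms
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
    # Elementwise minimum, averaged over the batch; negate so that
    # gradient descent maximizes the PPO objective
    return -torch.min(unclipped, clipped).mean()
```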