GPMO: Gradient Perturbation-Based Contrastive Learning for Molecule Optimization
Authors: Xixi Yang, Li Fu, Yafeng Deng, Yuansheng Liu, Dongsheng Cao, Xiangxiang Zeng
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical studies show that GPMO outperforms the state-of-the-art molecule optimization methods. Furthermore, the negative and positive perturbations improve the robustness of GPMO. |
| Researcher Affiliation | Collaboration | (1) College of Computer Science and Electronic Engineering, Hunan University, Changsha, China; (2) Carbon Silicon AI Technology Co., Ltd, Hangzhou, Zhejiang, China; (3) Xiangya School of Pharmaceutical Sciences, Central South University, Changsha, Hunan, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper, nor does it include a specific repository link or an explicit code release statement. |
| Open Datasets | Yes | GPMO is trained in the pre-training stage using the dataset from MOSES [Polykovskiy et al., 2020]. In the molecule optimization stage, GPMO utilizes a dataset from previous work [He et al., 2021], and the statistical information is presented in Table 1. |
| Dataset Splits | Yes | Val is the abbreviation of validation. The train, validation, and test sets are divided with a random split. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | We set the weight of negative perturbation λ ∈ {1, 3, 5} and the weight of positive perturbation µ ∈ {1, 3, 5}. We observe that the variance in performance across different gradient perturbation combinations was no more than 0.14, indicating the robustness of GPMO. |
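The experiment-setup row describes a grid over the two perturbation weights, λ and µ, each drawn from {1, 3, 5}. A minimal sketch of enumerating that grid is shown below; the function name `perturbation_grid` is illustrative and not from the paper, which does not release code.

```python
from itertools import product

# Weight grids reported in the paper: negative-perturbation weight (lambda)
# and positive-perturbation weight (mu) each take values in {1, 3, 5}.
LAMBDAS = [1, 3, 5]
MUS = [1, 3, 5]

def perturbation_grid(lambdas=LAMBDAS, mus=MUS):
    """Enumerate every (lambda, mu) gradient-perturbation combination.

    Hypothetical helper: the paper only states the value ranges, so this
    simply lists the 9 combinations a full sweep would cover.
    """
    return list(product(lambdas, mus))

grid = perturbation_grid()
print(len(grid))  # 9 combinations in total
```

Each `(lambda, mu)` pair would correspond to one training run; the paper's reported variance across these combinations (at most 0.14) is computed over such a sweep.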