Communicative Subgraph Representation Learning for Multi-Relational Inductive Drug-Gene Interaction Prediction
Authors: Jiahua Rao, Shuangjia Zheng, Sijie Mai, Yuedong Yang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate our method, we compiled two new benchmark datasets from DrugBank and DGIdb. The comprehensive experiments on the two datasets showed that our method outperformed state-of-the-art baselines in the transductive scenarios and achieved superior performance in the inductive ones. |
| Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Sun Yat-sen University; (2) School of Electronic and Information Technology, Sun Yat-sen University; (3) Key Laboratory of Machine Intelligence and Advanced Computing, Sun Yat-sen University; (4) Galixir Technologies Ltd, Beijing |
| Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but it does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | Yes | We implemented CoSMIG using PyTorch Geometric [Fey and Lenssen, 2019], which is available at https://github.com/biomed-AI/CoSMIG. |
| Open Datasets | Yes | To evaluate the effectiveness of CoSMIG, we compiled the multi-relational datasets from DGIdb [Cotto et al., 2018] and DrugBank [Wishart et al., 2018], respectively (Table 1). |
| Dataset Splits | No | We tuned model hyperparameters based on cross-validation results on DrugBank and used them across all datasets. No explicit train/validation/test split specification is given; see the illustrative split sketch after the table. |
| Hardware Specification | Yes | The training process lasted 80 epochs on an NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | We implemented CoSMIG using PyTorch Geometric [Fey and Lenssen, 2019] |
| Experiment Setup | Yes | The hop number h was set to 3. The depth of the model was set to 4. For each subgraph, we randomly dropped out its adjacency matrix entries with a probability of 0.1 during training. The training process lasted 80 epochs on an NVIDIA GeForce RTX 3090 GPU. See the hyperparameter sketch after the table. |
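On dataset splits: the paper reports tuning hyperparameters via cross-validation on DrugBank but does not publish the splits themselves. The sketch below shows what a reproducible split specification could look like; the 5 folds, seed 42, and the `interaction_pairs` placeholder are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: a pinned cross-validation split for drug-gene
# interaction pairs. Fold count (5) and seed (42) are assumptions; the paper
# does not specify them. `interaction_pairs` is a hypothetical stand-in.
import numpy as np
from sklearn.model_selection import KFold

interaction_pairs = np.arange(1000)  # stand-in for drug-gene interaction IDs
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, val_idx) in enumerate(kfold.split(interaction_pairs)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation pairs")
```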
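On the experiment setup: the quoted row fixes the hop number (h = 3) and the adjacency-entry dropout probability (0.1). A minimal sketch of how those two choices could be realized with PyTorch Geometric utilities follows; the toy graph, variable names, and the use of `k_hop_subgraph`/`dropout_edge` are assumptions for illustration, not the authors' released CoSMIG code.

```python
# Minimal sketch, assuming PyTorch Geometric: extract the h-hop subgraph
# enclosing a drug-gene pair and apply the 0.1 adjacency dropout during
# training. The toy graph below is invented for illustration.
import torch
from torch_geometric.utils import k_hop_subgraph, dropout_edge

NUM_HOPS = 3          # hop number h reported in the paper
EDGE_DROPOUT_P = 0.1  # adjacency-entry dropout probability during training

# Toy interaction graph: nodes 0-4 play the role of drugs, 5-7 of genes.
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [5, 5, 6, 6, 7]])
edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)  # undirected

drug, gene = 0, 5  # the target drug-gene pair

# Extract the h-hop enclosing subgraph, relabeling nodes to local indices.
subset, sub_edge_index, mapping, edge_mask = k_hop_subgraph(
    [drug, gene], NUM_HOPS, edge_index, relabel_nodes=True)

# During training, randomly drop 10% of the subgraph's adjacency entries.
# (dropout_edge requires PyG >= 2.1; older releases exposed dropout_adj.)
sub_edge_index, kept = dropout_edge(sub_edge_index, p=EDGE_DROPOUT_P,
                                    training=True)

print(f"subgraph nodes: {subset.tolist()}, edges kept: {sub_edge_index.size(1)}")
```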