Customized Subgraph Selection and Encoding for Drug-drug Interaction Prediction
Authors: Haotong Du, Quanming Yao, Juzheng Zhang, Yang Liu, Zhen Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness and superiority of the proposed method, with the discovered subgraphs and encoding functions highlighting the model's adaptability. |
| Researcher Affiliation | Academia | Haotong Du¹, Quanming Yao², Juzheng Zhang², Yang Liu¹, Zhen Wang¹; ¹Northwestern Polytechnical University, ²Tsinghua University |
| Pseudocode | Yes | Algorithm 1: The search algorithm of CSSE-DDI. |
| Open Source Code | Yes | Our code is available at https://github.com/LARS-research/CSSE-DDI. |
| Open Datasets | Yes | Experiments are conducted on two public benchmark DDI datasets: DrugBank [42] and TWOSIDES [43]. Detailed descriptions of these datasets are presented in Appendix B.1. |
| Dataset Splits | Yes | Let D_tra and D_val denote the training and validation sets, respectively. |
| Hardware Specification | Yes | All the experiments are implemented in Python with the PyTorch framework [64] and run on a server machine with a single NVIDIA RTX 3090 GPU with 24GB memory and 64GB of RAM. |
| Software Dependencies | Yes | All the experiments are implemented in Python with the PyTorch framework [64]. |
| Experiment Setup | Yes | For CSSE-DDI, we set the epoch to 400 for training the supernet and to 400 for training the sub-supernets. We set the temperature parameter to 0.05. Repeating 5 times with different seeds, we obtain 5 candidates. The searched candidates are fine-tuned individually with the hyper-parameters. In the fine-tuning stage, we use the ReduceLROnPlateau scheduler to adjust the learning rate dynamically. Each candidate has 10 hyper steps. In each hyper step, a set of hyper-parameters is sampled from Table 8. (A minimal scheduler sketch follows the table.) |
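
The fine-tuning procedure quoted above maps onto a fairly standard PyTorch loop. The sketch below is a minimal illustration under stated assumptions, not the authors' released code: the hyper-parameter ranges are placeholders standing in for Table 8 (not reproduced here), the Adam optimizer and the `factor`/`patience` values of the scheduler are assumptions, and `train_one_epoch` / `evaluate` are hypothetical helpers.

```python
import random
import torch

# Illustrative hyper-parameter space; the real search space comes from
# Table 8 of the paper and is not reproduced here.
HYPERPARAM_SPACE = {
    "lr": [1e-4, 5e-4, 1e-3],
    "weight_decay": [0.0, 1e-5, 1e-4],
}

def finetune_candidate(model, train_loader, val_loader,
                       num_hyper_steps=10, epochs=400):
    """Fine-tune one searched candidate, resampling hyper-parameters each hyper step."""
    best_val = float("-inf")
    for _ in range(num_hyper_steps):
        # Sample one hyper-parameter configuration per hyper step.
        cfg = {k: random.choice(v) for k, v in HYPERPARAM_SPACE.items()}
        optimizer = torch.optim.Adam(
            model.parameters(), lr=cfg["lr"], weight_decay=cfg["weight_decay"]
        )
        # ReduceLROnPlateau lowers the learning rate when the validation
        # metric stops improving (factor/patience are illustrative values).
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
            optimizer, mode="max", factor=0.5, patience=10
        )
        for _epoch in range(epochs):
            train_one_epoch(model, train_loader, optimizer)  # hypothetical helper
            val_metric = evaluate(model, val_loader)         # hypothetical helper
            scheduler.step(val_metric)
            best_val = max(best_val, val_metric)
    return best_val
```

As in the reported setup, each candidate is given 10 hyper steps, and the scheduler reacts to the validation metric rather than following a fixed decay schedule.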