Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Multilevel Attention Network with Semi-supervised Domain Adaptation for Drug-Target Prediction

Authors: Zhousan Xie, Shikui Tu, Lei Xu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four datasets show that MlanDTI achieves state-of-the-art performance over other methods under intra-domain settings and outperforms all other approaches under cross-domain settings.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; (2) Guangdong Institute of Intelligence Science and Technology, Zhuhai, Guangdong 519031, China
Pseudocode | No | The paper describes its procedures and architecture in text and diagrams, but it does not include explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | The source code is available at https://github.com/CMACH508/MlanDTI.
Open Datasets | Yes | We evaluated our model on the human dataset, the Caenorhabditis elegans dataset (Tsubaki, Tomii, and Sese 2019), the BindingDB dataset (Liu et al. 2007), and the BioSNAP dataset (Huang et al. 2021).
Dataset Splits | Yes | For the intra-domain evaluation, we randomly split the dataset into training, validation, and test sets with a ratio of 8:1:1 for the smaller human and C. elegans datasets, and 7:1:2 for the larger BindingDB and BioSNAP datasets.
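The quoted split procedure can be sketched as follows; this is a minimal illustration using a plain index shuffle, and the function name, seed handling, and helper structure are assumptions rather than the authors' code:

```python
import random

def split_indices(n, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly split n sample indices into train/val/test sets by the given ratios."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for a reproducible split
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 8:1:1 as quoted for the smaller datasets; the larger datasets would use
# ratios=(0.7, 0.1, 0.2) for the 7:1:2 split.
train, val, test = split_indices(1000)
```

Handing the remainder to the test set ensures every index is assigned exactly once even when the ratios do not divide n evenly.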
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU, GPU models, memory).
Software Dependencies | No | While 'PyTorch' and the 'Adam optimizer' are mentioned, specific version numbers for these software dependencies are not provided in the paper.
Experiment Setup | Yes | Our proposed method is implemented in PyTorch, utilizing the Adam optimizer with an initial learning rate of 0.001. Detailed hyperparameter settings are provided in the appendix.
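The quoted setup names only the optimizer and learning rate. As a reference for what that configuration computes, here is a minimal single-parameter Adam update (Kingma and Ba) in plain Python with the quoted lr=0.001; the default betas and eps are the usual PyTorch defaults, assumed rather than stated in the paper:

```python
# One Adam step for a scalar parameter theta, given its gradient and the
# running moment estimates m (first moment) and v (second moment) at step t.
def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad       # update biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # update biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-correct the first moment
    v_hat = v / (1 - beta2 ** t)             # bias-correct the second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

On the very first step the bias-corrected ratio m_hat / sqrt(v_hat) is close to the gradient's sign, so the parameter moves by roughly lr regardless of the gradient's magnitude.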