Learning Data-Driven Drug-Target-Disease Interaction via Neural Tensor Network

Authors: Huiyuan Chen, Jing Li

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets demonstrate the effectiveness of the NeurTN model.
Researcher Affiliation | Academia | Huiyuan Chen and Jing Li, Department of Computer and Data Sciences, Case Western Reserve University (hxc501@case.edu, jingli@cwru.edu).
Pseudocode | No | The paper describes its model architecture and components in detail, but does not provide pseudocode or a clearly labeled algorithm block (a generic neural tensor network sketch is given after this table).
Open Source Code | No | The paper does not provide an explicit statement about releasing code or a link to a source-code repository.
Open Datasets | Yes | We obtain data from three public databases [Chen and Li, 2019; Wang et al., 2018]: CTD, DrugBank, and UniProt.
Dataset Splits | Yes | We randomly split the dataset into 80% training, 10% validation, and 10% test sets (see the split sketch after this table).
Hardware Specification | No | The paper states that models are built upon PyTorch with the Adam optimizer, but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions PyTorch and RDKit but does not specify version numbers for these dependencies, for example, 'Our models are built upon PyTorch with Adam optimizer' and 'SMILES strings can be converted to molecular graphs using RDKit tool' (see the RDKit sketch after this table).
Experiment Setup | Yes | For NeurTN, the embedding size r in Eq. (5) is searched within [16, 32, 64, 128]. For both MLP and CTN, we employ three hidden layers with dropout ratio ρ = 0.3, where each layer sequentially halves the size of its input. Our models are built upon PyTorch with the Adam optimizer [Kingma and Ba, 2015]. We search the batch size and the learning rate within {128, 256, 512, 1024} and {0.001, 0.005, 0.01, 0.05, 0.1}, respectively, using grid search to find the best parameter settings. We tune model parameters on the validation set and terminate training if the performance does not improve for 100 epochs.
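
Since no pseudocode or code is released, the sketch below shows a generic neural tensor network scoring layer in PyTorch, in the spirit of Socher et al.'s NTN formulation; the `NTNScorer` class name, the number of tensor slices, and the embedding size are illustrative assumptions and are not the paper's exact Eq. (5).

```python
import torch
import torch.nn as nn

class NTNScorer(nn.Module):
    """Generic neural tensor network scorer (illustrative; not the paper's exact Eq. (5)).

    Scores a pair of entity embeddings (e.g., drug and target) with a bilinear
    tensor product plus a standard linear term.
    """

    def __init__(self, embed_dim: int, num_slices: int):
        super().__init__()
        # One bilinear form per tensor slice: e1^T W_k e2 for k = 1..num_slices.
        self.bilinear = nn.Bilinear(embed_dim, embed_dim, num_slices, bias=False)
        # Standard linear term V [e1; e2] + b.
        self.linear = nn.Linear(2 * embed_dim, num_slices)
        # Output weights that collapse the slice activations to a scalar score.
        self.out = nn.Linear(num_slices, 1, bias=False)

    def forward(self, e1: torch.Tensor, e2: torch.Tensor) -> torch.Tensor:
        hidden = torch.tanh(self.bilinear(e1, e2) + self.linear(torch.cat([e1, e2], dim=-1)))
        return self.out(hidden).squeeze(-1)

# Example usage with an embedding size taken from the paper's search range.
scorer = NTNScorer(embed_dim=64, num_slices=4)
drug, target = torch.randn(8, 64), torch.randn(8, 64)
scores = scorer(drug, target)  # shape: (8,)
```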
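
The 80/10/10 random split reported above can be reproduced with a minimal sketch such as the following; the sample count and fixed seed are placeholders, since the paper does not state them.

```python
import numpy as np

def random_split(n_samples: int, seed: int = 0):
    """Randomly split indices into 80% train, 10% validation, 10% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = random_split(10_000)
```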
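
For the RDKit step quoted above, a hedged sketch of converting a SMILES string into a simple molecular graph is shown below; the featurization (atomic numbers plus a binary adjacency matrix) is an assumption, as the paper does not specify its exact atom features.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import rdmolops

def smiles_to_graph(smiles: str):
    """Convert a SMILES string into (atom features, adjacency matrix) with RDKit."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    # Minimal node features: atomic numbers (a real model would use richer features).
    atom_features = np.array([atom.GetAtomicNum() for atom in mol.GetAtoms()])
    # Binary adjacency matrix over heavy atoms.
    adjacency = rdmolops.GetAdjacencyMatrix(mol)
    return atom_features, adjacency

feats, adj = smiles_to_graph("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
```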
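
Finally, the reported experiment setup (three hidden layers that halve their input size, dropout 0.3, Adam, grid search over batch size and learning rate, early stopping after 100 non-improving epochs) could be wired together roughly as follows; the `build_mlp` helper, the loss choice, the validation metric, and the data loaders are placeholders rather than the authors' code.

```python
import itertools
import torch
import torch.nn as nn

def build_mlp(in_dim: int, dropout: float = 0.3) -> nn.Sequential:
    """Three hidden layers, each halving its input size, with dropout 0.3."""
    layers, dim = [], in_dim
    for _ in range(3):
        layers += [nn.Linear(dim, dim // 2), nn.ReLU(), nn.Dropout(dropout)]
        dim //= 2
    layers += [nn.Linear(dim, 1)]
    return nn.Sequential(*layers)

# Hyperparameter grids reported in the paper.
batch_sizes = [128, 256, 512, 1024]
learning_rates = [0.001, 0.005, 0.01, 0.05, 0.1]
embedding_sizes = [16, 32, 64, 128]

def train_one_config(model, lr, train_loader, val_score_fn, patience=100):
    """Adam training with early stopping after `patience` non-improving epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best, stale = float("-inf"), 0
    while stale < patience:
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = nn.functional.binary_cross_entropy_with_logits(model(x).squeeze(-1), y)
            loss.backward()
            optimizer.step()
        score = val_score_fn(model)  # e.g., validation AUC on the held-out 10%
        best, stale = (score, 0) if score > best else (best, stale + 1)
    return best

# Grid search over all configurations (data loaders omitted in this sketch):
# for r, bs, lr in itertools.product(embedding_sizes, batch_sizes, learning_rates):
#     ... build loaders with batch size `bs`, embeddings of size `r`, then train ...
```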