Heterogeneous Causal Metapath Graph Neural Network for Gene-Microbe-Disease Association Prediction

Authors: Kexin Zhang, Feng Huang, Luotao Liu, Zhankun Xiong, Hongyu Zhang, Yuan Quan, Wen Zhang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments show that HCMGNN effectively predicts GMD associations and addresses the association sparsity issue by enhancing the graph's semantics and structure.
Researcher Affiliation | Academia | 1 College of Informatics, Huazhong Agricultural University, Wuhan 430070, China; 2 Hubei Key Laboratory of Agricultural Bioinformatics, Huazhong Agricultural University, Wuhan 430070, China
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks are present in the paper.
Open Source Code | Yes | Source code and dataset of HCMGNN are available online at https://github.com/zkxinxin/HCMGNN.
Open Datasets | Yes | We construct a dataset of gene-microbe-disease associations using several public databases. The gene-microbe associations are collected from GIMICA [Tang et al., 2021] and gutMGene [Cheng et al., 2022b]. The microbe-disease associations are derived from MicroPhenoDB [Yao et al., 2020], GutMDdisorder [Cheng et al., 2020] and Peryton [Skoufos et al., 2021]. The gene-disease associations are obtained from DisGeNET [Piñero et al., 2016].
Dataset Splits | Yes | We randomly split the dataset into a 90% cross-validation (CV) set and a 10% independent test set. Then we perform 5-fold CV on the CV set to train the model and optimize the hyper-parameters, and evaluate model performance on the independent test set. (A hedged sketch of this split protocol is given below the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library names with versions).
Experiment Setup | Yes | For training our model, we set the balance coefficient of the loss function to 0.7, employ the Adam optimizer with a learning rate of 0.005 to optimize the model, and adopt an early stopping mechanism with a patience of 50 to terminate training early (the hyper-parameter sensitivity analysis is provided in Appendix Section 4). (A hedged sketch of this training setup is given below the table.)
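
The dataset-split protocol quoted in the table (90% CV set, 10% independent test set, then 5-fold CV on the CV set) can be illustrated with a minimal Python sketch. The arrays `triples` and `labels` and the random seed below are placeholders for illustration, not values or names taken from the HCMGNN repository.

```python
# Hypothetical sketch of the split protocol described in the paper: 90/10 split into a
# cross-validation (CV) set and an independent test set, followed by 5-fold CV on the
# CV set for training and hyper-parameter selection.
import numpy as np
from sklearn.model_selection import train_test_split, KFold

rng_seed = 42                                  # assumed seed; the paper does not state one
triples = np.arange(1000).reshape(-1, 1)       # stand-in for gene-microbe-disease triples
labels = np.random.randint(0, 2, size=1000)    # stand-in for association labels

# 90% CV set / 10% independent test set
cv_idx, test_idx = train_test_split(
    np.arange(len(triples)), test_size=0.1, random_state=rng_seed, stratify=labels
)

# 5-fold CV on the CV set
kf = KFold(n_splits=5, shuffle=True, random_state=rng_seed)
for fold, (train_part, valid_part) in enumerate(kf.split(cv_idx)):
    train_idx, valid_idx = cv_idx[train_part], cv_idx[valid_part]
    # ... train on train_idx, validate on valid_idx ...

# The independent test set (test_idx) is held out for the final evaluation.
```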
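
Similarly, the experiment-setup row (balance coefficient 0.7, Adam with learning rate 0.005, early stopping with patience 50) maps onto a standard PyTorch training loop. The model, data, and loss terms below are dummy stand-ins so the sketch runs; HCMGNN's actual architecture and objective differ.

```python
# Hypothetical PyTorch sketch of the reported training setup, not the authors' code.
import torch
import torch.nn as nn

# Dummy stand-ins so the sketch is runnable.
model = nn.Linear(16, 1)
features = torch.randn(64, 16)
targets = torch.randint(0, 2, (64, 1)).float()
bce = nn.BCEWithLogitsLoss()

balance_coef = 0.7                             # balance coefficient of the loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)

best_val, patience, wait = float("inf"), 50, 0
for epoch in range(1000):
    model.train()
    optimizer.zero_grad()
    logits = model(features)
    # Weighted combination of two loss terms (0.7 / 0.3); both terms are the same
    # BCE loss here purely for illustration.
    loss = balance_coef * bce(logits, targets) + (1 - balance_coef) * bce(logits, targets)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        val_loss = bce(model(features), targets).item()  # placeholder validation step
    if val_loss < best_val:
        best_val, wait = val_loss, 0           # improvement: reset the patience counter
    else:
        wait += 1
        if wait >= patience:                   # no improvement for 50 epochs: stop early
            break
```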