Heterogeneous Graph Learning for Multi-Modal Medical Data Analysis

Authors: Sein Kim, Namkyeong Lee, Junseok Lee, Dongmin Hyun, Chanyoung Park

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various real-world datasets demonstrate the superiority and practicality of HetMed. The source code for HetMed is available at https://github.com/Sein-Kim/Multimodal-Medical.
Researcher Affiliation | Academia | (1) Dept. of Industrial and Systems Engineering, KAIST, Daejeon, Republic of Korea; (2) Institute of Artificial Intelligence, POSTECH, Pohang, Republic of Korea; (3) Graduate School of Artificial Intelligence, KAIST, Daejeon, Republic of Korea
Pseudocode | No | The paper describes its methods using prose and mathematical equations but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code for HetMed is available at https://github.com/Sein-Kim/Multimodal-Medical.
Open Datasets | Yes | To evaluate our proposed HetMed, we conduct experiments on five multi-modal medical datasets. Specifically, we use three brain-related datasets and two breast-related datasets. Note that since 3D images can be readily converted to 2D images through slicing, we also report the performance on 3D image datasets when they are converted to 2D. The detailed statistics are summarized in Table 1, and further details on each dataset are described in Appendix 7.1.
Dataset Splits | Yes | For end-to-end framework evaluation, we split the data into train/validation/test splits of 60/10/30% following previous work (Holste et al. 2021). For pretraining framework evaluation, we use the whole dataset to pretrain the image encoder network following previous work (Azizi et al. 2021), and split the data into train/validation/test splits of 60/10/30% to train the final image classifier.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions software such as 'SimpleITK' and models such as 'ResNet-18' and 'single-layer GCN' but does not specify version numbers for these or other key software dependencies.
Experiment Setup | Yes | For hyperparameters, we tune them in certain ranges as follows: learning rate η in {0.0001, 0.0005, 0.001}, supervised loss parameter β in {0.01, 0.1, 1.0}, node embedding dimension size d in {64, 128, 256}, the number of clusters |R| in {3, 4, 5}, and the graph construction threshold θ in {0.01, 0.75, 0.9} for each relationship. Further details are described in Appendix 7.4.
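
For concreteness, the 60/10/30 train/validation/test protocol quoted in the Dataset Splits row can be sketched as below. This is a minimal illustration assuming a generic index-based dataset; the helper name, random seed, and sample count are illustrative assumptions and are not taken from the HetMed repository.

```python
# Hedged sketch of the 60/10/30 train/validation/test split quoted above.
# Helper name, seed, and sample count are illustrative assumptions.
import numpy as np

def split_indices(n_samples, train_frac=0.6, val_frac=0.1, seed=0):
    """Shuffle sample indices and carve out 60/10/30 train/val/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(train_frac * n_samples)
    n_val = int(val_frac * n_samples)
    train_idx = idx[:n_train]
    val_idx = idx[n_train:n_train + n_val]
    test_idx = idx[n_train + n_val:]  # remaining ~30%
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = split_indices(n_samples=1000)
```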
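
Similarly, the tuning ranges quoted in the Experiment Setup row can be laid out as a plain grid search. The search-space dictionary below simply enumerates the listed values, with comments mapping each key back to the paper's notation; the actual HetMed training and validation routine is not reproduced here, so the sketch only enumerates candidate configurations.

```python
# Hyperparameter grid from the Experiment Setup row; enumeration only,
# the HetMed training/validation loop itself is not reproduced here.
from itertools import product

SEARCH_SPACE = {
    "learning_rate": [0.0001, 0.0005, 0.001],    # eta
    "supervised_loss_weight": [0.01, 0.1, 1.0],  # beta
    "embedding_dim": [64, 128, 256],             # d
    "num_clusters": [3, 4, 5],                   # |R|
    "graph_threshold": [0.01, 0.75, 0.9],        # theta, per relationship
}

def iter_configs(search_space):
    """Yield every hyperparameter combination in the search space."""
    keys = list(search_space)
    for values in product(*(search_space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(iter_configs(SEARCH_SPACE))
print(f"{len(configs)} candidate configurations")  # 3^5 = 243
```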