Knowledge-Empowered Dynamic Graph Network for Irregularly Sampled Medical Time Series

Authors: Yicheng Luo, Zhen Liu, Linghao Wang, Binquan Wu, Junhao Zheng, Qianli Ma

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Experiment results on four healthcare datasets demonstrate that KEDGN significantly outperforms existing methods.
Researcher Affiliation Academia Yicheng Luo, Zhen Liu*, Linghao Wang, Junhao Zheng, Binquan Wu, Qianli Ma School of Computer Science and Engineering, South China University of Technology, Guangzhou, China {csluoyicheng2001, cszhenliu, cskyun_ng}@mail.scut.edu.cn, {linghaowang6, junhaozheng47}@outlook.com, qianlima@scut.edu.cn
Pseudocode Yes The pseudo-code for KEDGN is presented in Appendix A (Algorithm 1).
Open Source Code Yes Our code is available at https://github.com/qianlima-lab/KEDGN.
Open Datasets Yes We conduct experiments on four widely used irregular medical time series datasets, namely P19 [43], Physionet [44], MIMIC-III [45], and P12 [46], where Physionet is a reduced version of P12 considered by prior work [6]. P19 ... It is available at https://physionet.org/content/challenge-2019/1.0.0/. P12 ... It is available at https://physionet.org/content/challenge-2012/1.0.0/. MIMIC-III ... It is available at https://physionet.org/content/mimiciii/1.4/. Physionet ... It is available at https://physionet.org/content/challenge-2012/.
Dataset Splits Yes For the data pre-processing of MIMIC-III, we follow the method described in [48] and divide the dataset into three parts for training, validation, and testing with a ratio of 70%, 15%, and 15%. For the remaining three datasets, we adhere to the approach of [9], and the ratio of the training, validation, and testing sets is 8:1:1.
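
For concreteness, the reported ratios correspond to a split along the following lines (a minimal sketch assuming scikit-learn; the `split_indices` helper is hypothetical, not the authors' code):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def split_indices(n_samples, ratios, seed=0):
    """Split sample indices into train/val/test sets according to `ratios`."""
    train_frac, val_frac, test_frac = ratios
    idx = np.arange(n_samples)
    # Carve off the test set first, then split the remainder into train/val.
    trainval, test = train_test_split(idx, test_size=test_frac, random_state=seed)
    train, val = train_test_split(
        trainval, test_size=val_frac / (train_frac + val_frac), random_state=seed
    )
    return train, val, test

# MIMIC-III: 70% / 15% / 15%, following [48].
train_m, val_m, test_m = split_indices(10000, (0.70, 0.15, 0.15))
# P19, P12, Physionet: 8 : 1 : 1, following [9].
train_p, val_p, test_p = split_indices(10000, (0.80, 0.10, 0.10))
```
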
Hardware Specification Yes We conduct an analysis of the time and space overhead on the Physionet dataset with a batch size of 128, utilizing Nvidia 1080Ti GPU infrastructure.
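
A rough way to reproduce this kind of time-and-memory measurement (a minimal sketch assuming PyTorch with CUDA; the `profile_forward` helper is hypothetical and not the authors' measurement protocol):

```python
import time
import torch

def profile_forward(model, batch, device="cuda"):
    """Wall-clock time and peak GPU memory for a single forward pass."""
    model, batch = model.to(device), batch.to(device)
    torch.cuda.reset_peak_memory_stats(device)
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    with torch.no_grad():
        model(batch)
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start
    peak_mb = torch.cuda.max_memory_allocated(device) / 2**20
    return elapsed, peak_mb

# Example with a stand-in model and a batch of 128, mirroring the paper's batch size.
seconds, mem_mb = profile_forward(torch.nn.Linear(64, 1), torch.randn(128, 64))
```
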
Software Dependencies No The paper mentions 'Adam [50] optimizer' and 'BERT [38]' but does not provide specific version numbers for these or other software libraries/frameworks used.
Experiment Setup Yes We adopt the Adam [50] optimizer, and the number of training epochs is set to 10. Due to differences in dataset sizes, the learning rate is set to 0.001 for Physionet and P12 and 0.005 for MIMIC-III and P19. ... our model has a total of 5 hyperparameters: the dimension of query vectors q, the dimension of variable node embeddings n, the proportion of the density score α, the dimension of variable hidden states h, and the dimension of structured encoding representations k. For all datasets, h and k are set to be equal, and we search them over the range {8, 12, 16}. Additionally, we search the dimension of query vectors q in {5, 7, 9}, the dimension of variable node embeddings n in {7, 9, 11}, and the proportion of the density score α in {1.0, 2.0, 3.0}.
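
Taken together, the stated settings amount to an 81-point grid (3^4 combinations, with h and k tied to one value). A minimal sketch of iterating that grid with Adam, assuming PyTorch; the KEDGN constructor is not shown in this excerpt, so a stand-in model is used:

```python
import itertools
import torch

# Per-dataset learning rates as stated in the paper.
LEARNING_RATES = {"physionet": 1e-3, "p12": 1e-3, "mimic3": 5e-3, "p19": 5e-3}
EPOCHS = 10

# h and k are set equal, so a single axis covers both dimensions.
SEARCH_SPACE = {
    "hk_dim":        [8, 12, 16],   # hidden state h = structured encoding k
    "query_dim_q":   [5, 7, 9],
    "node_embed_n":  [7, 9, 11],
    "density_alpha": [1.0, 2.0, 3.0],
}

for values in itertools.product(*SEARCH_SPACE.values()):
    cfg = dict(zip(SEARCH_SPACE, values))
    model = torch.nn.Linear(cfg["hk_dim"], 1)  # stand-in for the KEDGN model
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATES["physionet"])
    # ... train for EPOCHS epochs and pick the best config on the validation set.
```
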