Context-Aware Health Event Prediction via Transition Functions on Dynamic Disease Graphs

Authors: Chang Lu, Tian Han, Yue Ning

AAAI 2022, pp. 4567-4574

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on two real-world EHR datasets show that the proposed model outperforms the state of the art in predicting health events.
Researcher Affiliation | Academia | Chang Lu, Tian Han, Yue Ning (Stevens Institute of Technology; {clu13, tian.han, yue.ning}@stevens.edu)
Pseudocode | No | The paper describes the model and its components using equations and descriptive text, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The source code of Chet can be found at https://github.com/LuChang-CS/Chet/.
Open Datasets | Yes | We use MIMIC-III (Johnson et al. 2016) and MIMIC-IV (Johnson et al. 2021) to validate the predictive power of Chet.
Dataset Splits | Yes | We further randomly split the two datasets based on patients into training/validation/test sets, which contain 6,000/493/1,000 patients for MIMIC-III and 8,000/1,000/1,000 for MIMIC-IV, respectively. (A split sketch follows the table.)
Hardware Specification | Yes | All programs are implemented using Python 3.8.6 and PyTorch 1.7.1 with CUDA 11.1 on a machine with Intel i9-9900K CPU, 64GB memory, and GeForce RTX 2080 Ti GPU.
Software Dependencies | Yes | All programs are implemented using Python 3.8.6 and PyTorch 1.7.1 with CUDA 11.1.
Experiment Setup | Yes | The hyper-parameters as well as activation functions are tuned on the validation set. Specifically, we set the threshold δ as 0.01. The embedding size s for M, N is 48; s for R is 32. The attention size a is also 32. The hidden units p of M-GRU and GRU are 256 on MIMIC-III and 350 on MIMIC-IV for the diagnosis prediction task. For the heart failure prediction task, p is 100 on MIMIC-III and 150 on MIMIC-IV. When training Chet, we use 200 epochs and the Adam (Kingma and Ba 2015) optimizer. The learning rate is set to 0.01. (A configuration sketch follows the table.)
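
The patient-level split reported in the Dataset Splits row is straightforward to reproduce in outline. Below is a minimal sketch assuming a seeded NumPy shuffle; the counts (6,000/493/1,000 for MIMIC-III, 8,000/1,000/1,000 for MIMIC-IV) come from the paper, while the function name, signature, and seed are illustrative assumptions, not the authors' code.

```python
import numpy as np

def split_patients(patient_ids, n_train, n_valid, n_test, seed=42):
    """Randomly partition patient IDs into train/validation/test sets.

    The counts follow the paper; the seed (42) is an arbitrary
    assumption, since the paper does not report one.
    """
    assert n_train + n_valid + n_test <= len(patient_ids)
    shuffled = np.random.default_rng(seed).permutation(patient_ids)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:n_train + n_valid + n_test])

# MIMIC-III counts from the paper: 6,000 / 493 / 1,000 patients
# (8,000 / 1,000 / 1,000 for MIMIC-IV).
train_ids, valid_ids, test_ids = split_patients(np.arange(7493), 6000, 493, 1000)
```

Splitting by patient rather than by visit keeps all admissions of one patient in a single partition, which is what the quoted sentence describes.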
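The Experiment Setup row pins down the reported hyperparameters. The sketch below collects them into a PyTorch training skeleton; every numeric value is taken from the paper, while the config key names and the `nn.GRU` stand-in model are assumptions for illustration (the authors' actual architecture lives in the linked repository).

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper (diagnosis prediction, MIMIC-III).
# Variants from the paper: p = 350 on MIMIC-IV for diagnosis prediction;
# p = 100 (MIMIC-III) or p = 150 (MIMIC-IV) for heart failure prediction.
config = {
    "threshold_delta": 0.01,  # δ, the reported threshold
    "embed_size_mn": 48,      # embedding size s for M and N
    "embed_size_r": 32,       # embedding size s for R
    "attention_size": 32,     # attention size a
    "hidden_units": 256,      # hidden units p of M-GRU and GRU
    "epochs": 200,
    "learning_rate": 0.01,    # Adam (Kingma and Ba 2015)
}

# nn.GRU is only a stand-in so the optimizer wiring below is runnable;
# the real Chet model is at https://github.com/LuChang-CS/Chet/.
model = nn.GRU(input_size=config["embed_size_mn"],
               hidden_size=config["hidden_units"],
               batch_first=True)
optimizer = torch.optim.Adam(model.parameters(), lr=config["learning_rate"])

for epoch in range(config["epochs"]):
    pass  # one pass over the training EHR sequences per epoch
```

Note that only the hidden size p changes across the four experimental settings (two tasks on two datasets); the remaining values are shared.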