Temporal Belief Memory: Imputing Missing Data during RNN Training

Authors: Yeo Jin Kim, Min Chi

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our TBM approach with real-world electronic health records (EHRs) consisting of 52,919 visits and 4,224,567 events on a task of early prediction of septic shock. We compare TBM against multiple baselines including both domain experts' rules and the state-of-the-art missing data handling approach using both RNN and long short-term memory. The experimental results show that TBM outperforms all the competitive baseline approaches for the septic shock early prediction task.
Researcher Affiliation | Academia | Yeo Jin Kim and Min Chi, North Carolina State University (ykim32@ncsu.edu, mchi@ncsu.edu)
Pseudocode | No | The paper includes mathematical equations and a diagram (Figure 1) illustrating the model architecture, but it does not provide any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper states, 'Our dataset constitutes anonymized clinical multivariate time series data, extracted from the EHR system at Christiana Care Health System from July, 2013 to December, 2015.' This indicates a proprietary dataset, and no information is provided for public access, such as a link, DOI, or a citation to a well-known public dataset.
Dataset Splits | Yes | In the learning process, we split data into 80% for training, 10% for validation, and 10% for test, and conduct 5-fold cross validation. (One possible reading of this splitting scheme is sketched below the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper mentions using RNNs, LSTM, and the Adam optimizer. However, it does not specify any software libraries (e.g., TensorFlow, PyTorch, scikit-learn) or their version numbers, which are necessary for reproducible software dependencies.
Experiment Setup | Yes | For both RNN and LSTM, we use one hidden layer with 30 hidden neurons and 32 maximum sequence length. We use the Adam optimizer [Kingma and Ba, 2015] with the batch size 30, and adopt early stopping with 7 patience after minimum 10 epochs. (A sketch of this configuration appears below the table.)
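
The Dataset Splits row reports an 80/10/10 train/validation/test split combined with 5-fold cross validation, but the paper does not state how the two are combined. Below is a minimal sketch of one plausible reading, in which each of the 5 folds holds out 20% of the visits and halves it into validation and test sets; the visit-level splitting unit, the use of scikit-learn, and the fixed random seed are assumptions.

```python
# Hypothetical reconstruction of the 80/10/10 split with 5-fold cross
# validation quoted in the Dataset Splits row. Splitting at the visit
# level and the fixed random seed are assumptions.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def five_fold_80_10_10(n_visits, seed=0):
    """Yield (train_idx, val_idx, test_idx) per fold: 80% / 10% / 10%."""
    kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, held_out_idx in kfold.split(np.arange(n_visits)):
        # Each fold holds out 20% of the visits; halve it into
        # validation (10%) and test (10%), leaving 80% for training.
        val_idx, test_idx = train_test_split(
            held_out_idx, test_size=0.5, random_state=seed)
        yield train_idx, val_idx, test_idx

# Example: iterate over the folds for the 52,919 visits in the dataset.
# for train_idx, val_idx, test_idx in five_fold_80_10_10(52919):
#     ...
```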
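
The Experiment Setup row specifies the reported hyperparameters but, as noted under Software Dependencies, not the software stack. The following is a minimal sketch of that configuration in Keras; it covers only the recurrent baseline setup, not the TBM imputation mechanism itself. The framework choice, the input feature count, the binary cross-entropy loss, and the AUC metric are assumptions, and the `start_from_epoch` argument used to enforce the 10-epoch minimum before early stopping requires Keras 2.11 or later.

```python
# Hypothetical Keras reconstruction of the reported setup: one hidden
# layer with 30 neurons, maximum sequence length 32, Adam optimizer,
# batch size 30, early stopping with patience 7 after at least 10 epochs.
from tensorflow import keras

MAX_SEQ_LEN = 32   # maximum sequence length (paper)
N_FEATURES = 24    # hypothetical number of clinical features per time step
HIDDEN_UNITS = 30  # hidden neurons in the single recurrent layer (paper)

model = keras.Sequential([
    keras.Input(shape=(MAX_SEQ_LEN, N_FEATURES)),
    keras.layers.LSTM(HIDDEN_UNITS),              # swap in SimpleRNN for the RNN variant
    keras.layers.Dense(1, activation="sigmoid"),  # binary septic-shock prediction
])
model.compile(
    optimizer=keras.optimizers.Adam(),            # Adam [Kingma and Ba, 2015]
    loss="binary_crossentropy",                   # assumed loss; not stated in the paper
    metrics=[keras.metrics.AUC()],
)

early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=7,            # patience of 7 (paper)
    start_from_epoch=10,   # minimum of 10 epochs before stopping (paper)
    restore_best_weights=True,
)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=30, epochs=100, callbacks=[early_stop])
```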