MNN: Multimodal Attentional Neural Networks for Diagnosis Prediction

Authors: Zhi Qiao, Xian Wu, Shen Ge, Wei Fan

Venue: IJCAI 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results on real world EHR datasets demonstrate the advantages of MNN in term of accuracy." |
| Researcher Affiliation | Industry | "Zhi Qiao, Xian Wu, Shen Ge and Wei Fan, Tencent Medical AI Lab, {xiaobuqiao, kevinxwu, shenge, davidwfan}@tencent.com" |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | "We use a publicly available multimodal EHR data, MIMIC-III released on PhysioNet [Goldberger et al., 2000]." |
| Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the partitioning into training, validation, and test sets. |
| Hardware Specification | No | The paper does not report the hardware used for its experiments (GPU/CPU models, processor speeds, or memory amounts). It states only: "We implement all the models with Tensorflow 1.4". |
| Software Dependencies | Yes | "We implement all the models with Tensorflow 1.4 [Abadi et al., 2015]." |
| Experiment Setup | Yes | "In all experiments, the learning rate is set to be 0.001, embedding size l = 64 and hidden state size r = 128 for our methods. We also use regularization (l2 norm with the coefficient 0.001), drop-out strategies (with the drop-out rate 0.5) and batch size 20 for all methods." |
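
For concreteness, the quoted hyperparameters can be dropped into a TensorFlow 1.x training graph. The sketch below is illustrative only: the optimizer choice (Adam), the GRU encoder, the sigmoid multi-label loss, the vocabulary size, and all variable names are assumptions, since the paper reports only the hyperparameter values quoted above and releases no code (per the Open Source Code row).

```python
# Minimal sketch of the reported training configuration (TensorFlow 1.4 era API).
# Only the constants below come from the paper; the model structure, optimizer,
# and loss are hypothetical stand-ins for the unreleased MNN implementation.
import tensorflow as tf

EMBED_SIZE = 64      # embedding size l = 64 (reported)
HIDDEN_SIZE = 128    # hidden state size r = 128 (reported)
LEARNING_RATE = 0.001  # reported
L2_COEF = 0.001      # l2-norm regularization coefficient (reported)
DROPOUT_RATE = 0.5   # drop-out rate (reported)
BATCH_SIZE = 20      # reported

VOCAB_SIZE = 5000    # hypothetical: the paper does not state a code vocabulary size

codes = tf.placeholder(tf.int32, [BATCH_SIZE, None])   # visit code sequences (assumed input)
labels = tf.placeholder(tf.float32, [BATCH_SIZE, VOCAB_SIZE])
keep_prob = tf.placeholder_with_default(1.0 - DROPOUT_RATE, [])

embedding = tf.get_variable("embedding", [VOCAB_SIZE, EMBED_SIZE])
x = tf.nn.embedding_lookup(embedding, codes)
x = tf.nn.dropout(x, keep_prob=keep_prob)

# GRU encoder over the visit sequence (an assumption; the paper's encoder differs).
cell = tf.nn.rnn_cell.GRUCell(HIDDEN_SIZE)
_, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

logits = tf.layers.dense(state, VOCAB_SIZE)
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
# Apply the l2 penalty to all trainable weights -- one possible reading of
# "regularization (l2 norm with the coefficient 0.001)".
loss += L2_COEF * tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])

train_op = tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss)  # optimizer is an assumption
```

Note that the paper does not specify which weights the l2 penalty covers or which optimizer is used; anyone attempting a reproduction would need to treat those as free choices.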