Clinical Risk Prediction with Temporal Probabilistic Asymmetric Multi-Task Learning

Authors: A. Tuan Nguyen, Hyewon Jeong, Eunho Yang, Sung Ju Hwang (pp. 9081-9091)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally validate our Temporal Probabilistic Asymmetric Multi-Task Learning (TP-AMTL) model on four clinical risk prediction datasets against multiple baselines, which our model significantly outperforms without any sign of negative transfer. The results show that our model obtains significant improvements over strong multi-task learning baselines with no negative transfer on any of the tasks (Table 2).
Researcher Affiliation | Collaboration | 1) School of Computing, Korea Advanced Institute of Science and Technology; 2) AI Graduate School, Korea Advanced Institute of Science and Technology; 3) AITRICS; 4) Department of Computer Science, University of Oxford
Pseudocode | No | The paper describes its proposed methods in detail using mathematical equations and textual explanations but does not provide a formal pseudocode block or an algorithm label.
Open Source Code | No | The paper does not contain any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | We compile a dataset out of the MIMIC-III dataset (Johnson et al. 2016)... PhysioNet (Citi and Barbieri 2012)... We use a variant (Ude M 2014) of the MNIST dataset (LeCun and Cortes 2010)...
Dataset Splits | Yes | After pre-processing, approximately 2000 data points with a sufficient amount of features were selected, which was randomly split into approximately 1000/500/500 for training/validation/test. ... We use a random split of 2800/400/800 for training/validation/test.
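The random train/validation/test partition quoted above can be sketched as follows. This is a minimal illustration only, not the authors' code; the `random_split` helper, the fixed seed, and the use of index lists are all assumptions for the sketch.

```python
import random

def random_split(indices, sizes, seed=0):
    """Shuffle example indices and partition them into consecutive groups of the given sizes."""
    assert sum(sizes) == len(indices), "split sizes must cover the whole dataset"
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = indices[:]
    rng.shuffle(shuffled)
    splits, start = [], 0
    for size in sizes:
        splits.append(shuffled[start:start + size])
        start += size
    return splits

# e.g. the 2800/400/800 train/validation/test split mentioned above
train, val, test = random_split(list(range(4000)), [2800, 400, 800])
```

Any comparable splitting utility (e.g. scikit-learn's `train_test_split` applied twice) would serve the same purpose; the point is only that the partition is random and disjoint.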
Hardware Specification | No | The paper does not provide any specific details regarding the hardware specifications (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper refers to various deep learning models and architectures (e.g., LSTM, Transformer, RETAIN), but it does not specify version numbers for any software dependencies or programming languages used.
Experiment Setup | No | The paper states: 'Please see the supplementary file for descriptions of the baselines, experimental details, and the hyper-parameters used.' Detailed experimental setup information, including hyperparameters, is therefore deferred to supplementary materials rather than provided in the main text.