ConCare: Personalized Clinical Feature Embedding via Capturing the Healthcare Context
Authors: Liantao Ma, Chaohe Zhang, Yasha Wang, Wenjie Ruan, Jiangtao Wang, Wen Tang, Xinyu Ma, Xin Gao, Junyi Gao
AAAI 2020, pp. 833-840 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world EMR datasets demonstrate the effectiveness of ConCare. The medical findings extracted by ConCare are also empirically confirmed by human experts and medical literature. |
| Researcher Affiliation | Academia | 1Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China 2National Engineering Research Center of Software Engineering, Peking University, Beijing, China 3School of Electronics Engineering and Computer Science, Peking University, Beijing, China 4School of Computing and Communications, Lancaster University, UK 5Division of Nephrology, Peking University Third Hospital, Beijing, China |
| Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | We release our code and case studies at GitHub https://github.com/Accountable-Machine-Intelligence/ConCare |
| Open Datasets | Yes | MIMIC-III Dataset. We use ICU data from the publicly available Medical Information Mart for Intensive Care (MIMIC-III) database (Johnson et al. 2016). |
| Dataset Splits | Yes | We fix a test set of 15% of patients and divide the rest of the dataset into the training set and validation set with a proportion of 0.85 : 0.15. The training set is further split into 10 folds to perform the 10-fold cross-validation. |
| Hardware Specification | Yes | The training was done in a machine equipped with CPU: Intel Xeon E5-2630, 256GB RAM, and GPU: Nvidia Titan V by using PyTorch 1.1.0. |
| Software Dependencies | Yes | The training was done in a machine equipped with CPU: Intel Xeon E5-2630, 256GB RAM, and GPU: Nvidia Titan V by using PyTorch 1.1.0. |
| Experiment Setup | Yes | For training the model, we used Adam (Kingma and Ba 2014) with a mini-batch of 256 patients and a learning rate of 1e-3. |
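The dataset-split protocol reported above (a fixed 15% test set, a 0.85 : 0.15 train/validation split of the remainder, and 10-fold cross-validation over the training set) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the patient IDs and scikit-learn usage are assumptions.

```python
# Sketch of the reported split protocol; patient_ids and random_state are
# hypothetical placeholders, not taken from the paper's released code.
import numpy as np
from sklearn.model_selection import train_test_split, KFold

patient_ids = np.arange(1000)  # hypothetical patient identifiers

# Hold out a fixed 15% test set of patients.
dev_ids, test_ids = train_test_split(patient_ids, test_size=0.15, random_state=42)

# Split the remaining patients 0.85 : 0.15 into training and validation sets.
train_ids, val_ids = train_test_split(dev_ids, test_size=0.15, random_state=42)

# 10-fold cross-validation over the training set.
kf = KFold(n_splits=10, shuffle=True, random_state=42)
folds = [(tr_idx, va_idx) for tr_idx, va_idx in kf.split(train_ids)]
```

Splitting at the patient level (rather than the visit level) keeps all records of one patient in a single partition, which avoids leakage between train and test.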
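The experiment setup cites Adam (Kingma and Ba 2014) with a learning rate of 1e-3. A minimal single-parameter version of the Adam update rule, using that learning rate, is sketched below; the beta1/beta2/epsilon defaults and the toy objective are assumptions, not values stated in the paper.

```python
# Minimal Adam update (Kingma & Ba 2014) on a scalar parameter.
# lr=1e-3 matches the paper; b1, b2, eps are the optimizer's usual defaults
# and are assumed here, not quoted from the paper.
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta**2 starting from theta = 1.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    grad = 2 * theta                       # gradient of theta**2
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Because the update is normalized by the second-moment estimate, each step moves theta by roughly the learning rate, so 100 steps from 1.0 bring it near 0.9.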