Self-Supervised Adversarial Distribution Regularization for Medication Recommendation
Authors: Yanda Wang, Weitong Chen, Dechang PI, Lin Yue, Sen Wang, Miao Xu
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | SARMR outperforms all baseline methods in the experiment on a real-world clinical dataset. In this section, SARMR is compared with different baseline methods on the real-world clinical dataset MIMIC-III v1.4 [Johnson et al., 2016]. |
| Researcher Affiliation | Academia | 1Nanjing University of Aeronautics and Astronautics, Nanjing, 211106 China 2The University of Queensland, Brisbane QLD 4072 Australia 3Northeast Normal University, Changchun, 130024 China |
| Pseudocode | No | The paper describes the model's steps and uses equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The model is implemented with PyTorch and trained on an NVIDIA TITAN Xp GPU, and more information about source code could be found at GitHub: https://github.com/yanda-wang/SARMR |
| Open Datasets | Yes | In this section, SARMR is compared with different baseline methods on the real-world clinical dataset MIMIC-III v1.4 [Johnson et al., 2016]. |
| Dataset Splits | No | The paper describes the dataset used ('MIMIC-III v1.4') and patient selection criteria, but it does not explicitly provide details about specific train, validation, or test dataset splits (e.g., percentages, sample counts, or cross-validation folds). |
| Hardware Specification | Yes | The model is implemented with PyTorch and trained on an NVIDIA TITAN Xp GPU |
| Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework, but it does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | No | The paper describes the overall training strategy and loss functions, but it does not explicitly provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training configurations in the main text. |