VITA: ‘Carefully Chosen and Weighted Less’ Is Better in Medication Recommendation

Authors: Taeri Kim, Jiho Heo, Hongil Kim, Kijung Shin, Sang-Wook Kim

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments using real-world datasets, we demonstrate the superiority of VITA (spec., up to 5.67% higher accuracy, in terms of Jaccard, than the best competitor) and the effectiveness of its two core ideas. (See the Jaccard sketch following this table.)
Researcher Affiliation | Academia | (1) Department of Computer Science, Hanyang University, South Korea; (2) Department of Artificial Intelligence, Hanyang University, South Korea; (3) Kim Jaechul Graduate School of AI & School of Electrical Engineering, KAIST, South Korea
Pseudocode | No | The paper describes its methods and components using textual explanations and mathematical equations, but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | The code is available at https://github.com/jhheo0123/VITA.
Open Datasets | Yes | We used the MIMIC-III dataset (Johnson et al. 2016), widely used in medication recommendation studies (Shang et al. 2019b; Yang et al. 2021a,b; Wu et al. 2022b), and the MIMIC-IV dataset (Johnson et al. 2023), which is a follow-up to the MIMIC-III dataset.
Dataset Splits | Yes | We randomly split the patients in each dataset into training (4/6), validation (1/6), and test (1/6) sets as in (Shang et al. 2019b; Yang et al. 2021b; Wu et al. 2022b).
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU specifications, memory).
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, TensorFlow/PyTorch versions, or other libraries).
Experiment Setup | Yes | We randomly split the patients in each dataset into training (4/6), validation (1/6), and test (1/6) sets as in (Shang et al. 2019b; Yang et al. 2021b; Wu et al. 2022b). (See the split sketch following this table.)
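
The Jaccard figure quoted in the Research Type row refers to the standard multi-label accuracy metric used in medication recommendation: the overlap between the predicted and ground-truth medication sets of each visit, averaged over test visits. The sketch below is a minimal, generic implementation of that metric; the function and variable names are illustrative and are not taken from the VITA repository.

```python
# Minimal sketch of the Jaccard accuracy metric cited in the paper's claim:
# per-visit overlap between predicted and ground-truth medication sets,
# averaged over all test visits. Names are illustrative only.
from typing import List, Set


def jaccard_score(pred_meds: List[Set[str]], true_meds: List[Set[str]]) -> float:
    """Average Jaccard similarity over a list of visits."""
    scores = []
    for pred, true in zip(pred_meds, true_meds):
        union = pred | true
        if not union:                       # both sets empty: count as a perfect match
            scores.append(1.0)
        else:
            scores.append(len(pred & true) / len(union))
    return sum(scores) / len(scores) if scores else 0.0


# Example with two test visits: (2/3 + 1/2) / 2 ≈ 0.583
print(jaccard_score(
    [{"A01", "B02", "C03"}, {"D04"}],
    [{"A01", "B02"}, {"D04", "E05"}],
))
```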
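
The Dataset Splits and Experiment Setup rows both cite a random patient-level split into training (4/6), validation (1/6), and test (1/6) sets. The snippet below is one plausible way to reproduce such a split; it is an assumption about the procedure, not the authors' code, and the seed value is arbitrary.

```python
# Assumed patient-level 4/6 / 1/6 / 1/6 random split; not the authors' exact code.
import random
from typing import List, Tuple


def split_patients(patient_ids: List[str], seed: int = 1) -> Tuple[List[str], List[str], List[str]]:
    """Randomly split patient IDs into 4/6 train, 1/6 validation, and 1/6 test sets."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)        # deterministic shuffle of a copy
    n = len(ids)
    n_train, n_val = (4 * n) // 6, n // 6
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```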