Pre-training of Graph Augmented Transformers for Medication Recommendation

Authors: Junyuan Shang, Tengfei Ma, Cao Xiao, Jimeng Sun

IJCAI 2019

Reproducibility variables, results, and supporting LLM responses:

Research Type: Experimental
Evidence: "We used EHR data from MIMIC-III [Johnson et al., 2016] and conducted all our experiments on a cohort where patients have more than one visit. ... Table 3 compares the performance on the medication recommendation task."

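As a rough illustration of the cohort selection quoted above (patients with more than one visit), here is a minimal sketch assuming the MIMIC-III ADMISSIONS table has been exported to CSV; the file path and the use of pandas are assumptions, not details from the paper or the G-Bert repository:

```python
import pandas as pd

# Load MIMIC-III admissions (hypothetical path; MIMIC-III requires credentialed access).
admissions = pd.read_csv("ADMISSIONS.csv", usecols=["SUBJECT_ID", "HADM_ID"])

# Count distinct visits (hospital admissions) per patient.
visit_counts = admissions.groupby("SUBJECT_ID")["HADM_ID"].nunique()

# Keep only patients with more than one visit, as described in the paper.
multi_visit_ids = visit_counts[visit_counts > 1].index
cohort = admissions[admissions["SUBJECT_ID"].isin(multi_visit_ids)]

print(f"Cohort: {multi_visit_ids.size} patients, {len(cohort)} admissions")
```
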
Researcher Affiliation: Collaboration
Evidence: "Junyuan Shang (1,3), Tengfei Ma (2), Cao Xiao (1) and Jimeng Sun (3); (1) Analytics Center of Excellence, IQVIA, Cambridge, MA, USA; (2) IBM Research AI, Yorktown Heights, NY, USA; (3) Georgia Institute of Technology, Atlanta, GA, USA"

Pseudocode: No
Evidence: The paper describes the method using mathematical equations and descriptive text, but it does not include a clearly labeled pseudocode block or algorithm.

Open Source Code: Yes
Evidence: https://github.com/jshang123/G-Bert

Open Datasets: Yes
Evidence: "We used EHR data from MIMIC-III [Johnson et al., 2016]"

Dataset Splits: Yes
Evidence: "We randomly divide the dataset into training, validation and testing set in a 0.6 : 0.2 : 0.2 ratio."

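A 0.6 : 0.2 : 0.2 random split like the one quoted above can be reproduced in a few lines of NumPy; the fixed seed and splitting at the ID level are assumptions made for illustration:

```python
import numpy as np

def split_dataset(ids, seed=42):
    """Randomly split IDs into train/validation/test sets at a 0.6:0.2:0.2 ratio."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(ids)
    n_train = int(0.6 * len(shuffled))
    n_val = int(0.2 * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train_ids, val_ids, test_ids = split_dataset(np.arange(1000))
print(len(train_ids), len(val_ids), len(test_ids))  # 600 200 200
```
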
Hardware Specification: Yes
Evidence: "All methods are implemented in PyTorch [Paszke et al., 2017] and trained on an Ubuntu 16.04 with 8GB memory and Nvidia 1080 GPU."

Software Dependencies: No
Evidence: The paper mentions PyTorch and Ubuntu 16.04 but does not specify version numbers for PyTorch or any other key software library.

Experiment Setup: Yes
Evidence: "For G-BERT, the hyperparameters are adjusted on evaluation set: (1) GAT part: input embedding dimension as 75, number of attention heads as 4; (2) BERT part: hidden dimension as 300, dimension of position-wise feed-forward networks as 300, 2 hidden layers with 4 attention heads for each layer. ... Training is done through Adam [Kingma and Ba, 2014] at learning rate 5e-4."

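The reported hyperparameters can be gathered into a single configuration object; the sketch below uses only the numeric values quoted above, while the GBertConfig class and the placeholder model are illustrative stand-ins for the real architecture in the G-Bert repository:

```python
from dataclasses import dataclass

import torch

@dataclass
class GBertConfig:
    # GAT part (values from the paper).
    gat_embedding_dim: int = 75
    gat_attention_heads: int = 4
    # BERT part (values from the paper).
    hidden_dim: int = 300
    ffn_dim: int = 300
    num_hidden_layers: int = 2
    attention_heads_per_layer: int = 4
    # Optimizer (value from the paper).
    learning_rate: float = 5e-4

config = GBertConfig()

# Placeholder module standing in for the full G-BERT model.
model = torch.nn.Linear(config.hidden_dim, config.hidden_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
```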