Personalized Donor-Recipient Matching for Organ Transplantation
Authors: Jinsung Yoon, Ahmed Alaa, Martin Cadeiras, Mihaela van der Schaar
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on the UNOS heart transplant dataset show the superior prognostic value of Confident Match over competing benchmarks; Confident Match provides predictions of success with 95% accuracy for 5,489 of the 9,620 patients in the test population, 410 more patients than the most competitive benchmark algorithm (Deep Boost). (An illustrative coverage computation is sketched after the table.) |
| Researcher Affiliation | Academia | (1) Department of Electrical Engineering, University of California, Los Angeles (UCLA), CA 90095, USA; (2) David Geffen School of Medicine, University of California, Los Angeles (UCLA), CA 90095, USA |
| Pseudocode | Yes | Figure 1: Pseudo-code for Confident Match. |
| Open Source Code | No | The paper does not contain any statements about making its source code publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Experiments were conducted using the UNOS database for patients who underwent a heart transplant between 1987 and 2015 (Cecka 1996). |
| Dataset Splits | Yes | Of the 56,716 recipient-donor pairs, 37,677 pairs (66.43%) were used for training, 9,419 pairs (16.61%) for validation, and 9,620 pairs (16.96%) for testing. (A splitting sketch is given after the table.) |
| Hardware Specification | Yes | The execution time of Confident Match on this dataset is less than 5 hours on MATLAB R2015a with an Intel i5 (1.5 GHz) processor and 4 GB RAM. |
| Software Dependencies | Yes | The only software dependency stated is MATLAB R2015a: the execution time of Confident Match on this dataset is less than 5 hours on MATLAB R2015a with an Intel i5 (1.5 GHz) processor and 4 GB RAM. |
| Experiment Setup | No | The paper states that 'The validation set calibrated all parameters of Confident Match (α) and the benchmark algorithms,' but it does not provide the specific numerical values of these parameters or other detailed training configurations (e.g., learning rates, batch sizes, number of epochs) in the main text. |
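
The Dataset Splits row reports a 37,677 / 9,419 / 9,620 partition of the 56,716 recipient-donor pairs, but the paper does not describe how the partition was drawn. The sketch below shows one way to reproduce splits of those sizes; the random shuffling and the seed are assumptions, not details from the paper.

```python
import numpy as np

# Counts are taken from the paper (56,716 pairs split 66.43% / 16.61% / 16.96%);
# the shuffling procedure and the seed below are assumptions.
N_TOTAL, N_TRAIN, N_VAL = 56_716, 37_677, 9_419
N_TEST = N_TOTAL - N_TRAIN - N_VAL  # 9,620

rng = np.random.default_rng(seed=0)   # hypothetical seed
indices = rng.permutation(N_TOTAL)    # shuffle all pair indices

train_idx = indices[:N_TRAIN]
val_idx = indices[N_TRAIN:N_TRAIN + N_VAL]
test_idx = indices[N_TRAIN + N_VAL:]

print(len(train_idx), len(val_idx), len(test_idx))  # 37677 9419 9620
```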
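
The Research Type row quotes a coverage-style result: predictions that are 95% accurate for 5,489 of the 9,620 test patients. The paper does not spell out how patients are selected for confident prediction, so the function below is only an illustrative way to compute such a coverage number from per-patient confidence scores and correctness indicators; the confidence-thresholding rule is an assumption, not the authors' algorithm.

```python
import numpy as np

def coverage_at_accuracy(confidence, correct, target_acc=0.95):
    """Largest number of patients, taken in order of decreasing confidence,
    for which the running accuracy stays at or above target_acc."""
    order = np.argsort(-confidence)                  # most confident first
    hits = correct[order].astype(float)
    running_acc = np.cumsum(hits) / np.arange(1, hits.size + 1)
    ok = np.nonzero(running_acc >= target_acc)[0]
    return 0 if ok.size == 0 else int(ok[-1]) + 1

# Toy usage with synthetic scores (not data from the paper)
rng = np.random.default_rng(0)
conf = rng.random(9_620)
corr = rng.random(9_620) < conf    # higher confidence -> more often correct
print(coverage_at_accuracy(conf, corr))
```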