Inverse-Weighted Survival Games

Authors: Xintian Han, Mark Goldstein, Aahlad Puli, Thomas Wies, Adler Perotte, Rajesh Ranganath

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that these games optimize BS on simulations and then apply these principles on real-world cancer and critically-ill patient data." (See the Brier-score sketch after the table.)
Researcher Affiliation | Academia | Xintian Han (NYU, xintian.han@nyu.edu); Mark Goldstein (NYU, goldstein@nyu.edu); Aahlad Puli (NYU, aahlad@nyu.edu); Thomas Wies (NYU, wies@cs.nyu.edu); Adler J. Perotte (Columbia University, adler.perotte@columbia.edu); Rajesh Ranganath (NYU, rajeshr@cims.nyu.edu)
Pseudocode | Yes | Algorithm 1, "Following Gradients in Summed Games". (See the game-step sketch after the table.)
Open Source Code | Yes | Code is available at https://github.com/rajesh-lab/Inverse-Weighted-Survival-Games
Open Datasets | Yes | "Survival-MNIST [Gensheimer, 2019; Pölsterl, 2019] draws times conditionally on MNIST label Y. ... We use several datasets used in recent papers [Chen, 2020; Kvamme et al., 2019] and available in the Python packages DeepSurv [Katzman et al., 2018] and PyCox [Kvamme et al., 2019], and the R package survival [Therneau, 2021]." The datasets are: Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) [Curtis et al., 2012]; Rotterdam Tumor Bank (ROTT) [Foekens et al., 2000] and German Breast Cancer Study Group (GBSG) [Schumacher et al., 1994], combined into one dataset (ROTT & GBSG); and Study to Understand Prognoses Preferences Outcomes and Risks of Treatment (SUPPORT) [Knaus et al., 1995]. (See the data-loading sketch after the table.)
Dataset Splits | Yes | "For all datasets, we created a random 80/10/10 train/validation/test split for training and evaluation." (See the data-loading sketch after the table.)
Hardware Specification | Yes | "All models were trained on a single NVIDIA Quadro RTX 8000 GPU."
Software Dependencies | No | The paper mentions the Adam optimizer but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "Training was performed using Adam optimizer [Kingma and Ba, 2015] with a learning rate of 0.001. ... We used a batch size of 256 for all datasets except METABRIC, for which we used a batch size of 128. ... All models were trained for 500 epochs with early stopping based on the validation set negative log likelihood..." (See the training-loop sketch after the table.)
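
The "BS" in the Research Type row is the Brier score, computed with inverse-probability-of-censoring weights (IPCW). As a point of reference, here is a minimal NumPy sketch of a standard IPCW Brier score at a fixed time t; it illustrates the kind of inverse-weighted objective the games are built around, not the authors' exact game losses, and every name in it is illustrative.

```python
# Minimal sketch (not the authors' code) of the IPCW Brier score at time t.
# s_hat[i]  : model survival probability S(t | x_i)
# g_at_T[i] : censoring-survival estimate G(T_i) for observed failures
# g_at_t    : censoring-survival estimate G(t) for subjects still at risk
import numpy as np

def ipcw_brier(t, times, events, s_hat, g_at_T, g_at_t):
    died_by_t = (times <= t) & (events == 1)   # failure observed by t
    alive_at_t = times > t                     # still at risk at t
    # Subjects censored before t drop out of both terms (weight zero).
    term_dead = (s_hat ** 2) * died_by_t / g_at_T             # target S(t|x) = 0
    term_alive = ((1.0 - s_hat) ** 2) * alive_at_t / g_at_t   # target S(t|x) = 1
    return np.mean(term_dead + term_alive)
```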
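
For the Pseudocode row, here is a rough, hypothetical sketch of what "following gradients in summed games" could look like in PyTorch: two players (e.g., a failure model and a censoring model) each take a gradient step on their own loss, summed over time bins, with the opponent's output detached. The per-time loss below is a placeholder, and the paper's Algorithm 1 may differ in its details.

```python
# Hypothetical sketch of one round of alternating gradient steps in a
# summed two-player game. The per-time loss is a placeholder objective.
import torch
import torch.nn as nn

NUM_BINS = 5
f_net = nn.Linear(10, NUM_BINS)  # failure-model player (illustrative)
g_net = nn.Linear(10, NUM_BINS)  # censoring-model player (illustrative)
opt_f = torch.optim.Adam(f_net.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g_net.parameters(), lr=1e-3)
x = torch.randn(32, 10)          # toy batch of covariates

def per_time_loss(player, opponent, x, t):
    # Placeholder per-bin loss; the opponent is detached so each player
    # only follows the gradient of its own summed loss.
    return ((player(x)[:, t] - opponent(x)[:, t].detach()) ** 2).mean()

for opt, player, opponent in [(opt_f, f_net, g_net), (opt_g, g_net, f_net)]:
    opt.zero_grad()
    loss = sum(per_time_loss(player, opponent, x, t) for t in range(NUM_BINS))
    loss.backward()
    opt.step()
```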
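
The tabular benchmarks in the Open Datasets row ship with the pycox package the paper cites, and the split in the Dataset Splits row is a plain random 80/10/10. A minimal data-loading sketch, assuming scikit-learn for the split (the authors' exact pipeline and seed are not specified):

```python
# Minimal sketch: fetch the benchmarks via pycox and form an 80/10/10 split.
from pycox import datasets
from sklearn.model_selection import train_test_split

metabric = datasets.metabric.read_df()  # METABRIC
rott_gbsg = datasets.gbsg.read_df()     # Rotterdam & GBSG, pre-combined
support = datasets.support.read_df()    # SUPPORT
# Each DataFrame holds covariates plus 'duration' and 'event' columns.

train, rest = train_test_split(metabric, test_size=0.2, random_state=0)
val, test = train_test_split(rest, test_size=0.5, random_state=0)  # 10/10
```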
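
Finally, the Experiment Setup row pins down the optimizer, learning rate, batch sizes, epoch budget, and stopping criterion. A minimal PyTorch training-loop sketch of that configuration follows; the model, data, training objective, patience value, and validation routine are illustrative stand-ins, not the authors' code.

```python
# Minimal sketch of the reported setup: Adam (lr=0.001), batch size 256
# (128 for METABRIC), up to 500 epochs, early stopping on validation NLL.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
train_loader = DataLoader(
    TensorDataset(torch.randn(1000, 9), torch.randn(1000, 1)),
    batch_size=256,   # 128 for METABRIC
    shuffle=True,
)

def validation_nll(m: nn.Module) -> float:
    return torch.rand(()).item()  # stand-in for the real validation NLL

best, bad, patience = float("inf"), 0, 10  # patience value is an assumption
for epoch in range(500):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(xb), yb)  # placeholder objective
        loss.backward()
        optimizer.step()
    val = validation_nll(model)
    if val < best:
        best, bad = val, 0
    else:
        bad += 1
        if bad >= patience:
            break  # early stopping on validation NLL
```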