Multi-Source Survival Domain Adaptation
Authors: Ammar Shaker, Carolin Lawrence
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on two cancer data sets reveal a superb performance on target domains, a better treatment recommendation, and a weight matrix with a plausible explanation. |
| Researcher Affiliation | Industry | Ammar Shaker, Carolin Lawrence NEC Laboratories Europe GmbH, Heidelberg, Germany {Ammar.Shaker,Carolin.Lawrence}@neclab.eu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For both MSSDA and DeepSurv, we use the same architecture, a two-layered feature extractor with 200 and 20 units in the first and second hidden layers, respectively. The detailed architecture and the hyper-parameter search are explained in the supplementary material. We model the log-risk function as the non-linear function hφθ(x) learned by the fitted network architecture, i.e., r(x) = e^{hφθ(x)}. MSSDA and DeepSurv are trained for 20 epochs. Code: https://github.com/shaker82/MSSDA |
| Open Datasets | Yes | Datasets. We utilize two data sets from The Cancer Genome Atlas project (TCGA). This project analyzes the molecular profiles and the clinical data of 33 cancer types. (i) The messenger RNA data (mRNA) (Li et al. 2016b), which includes eight cancer types. Each patient is represented by 19171 binary features; see Table 1. (ii) The micro-RNA data (miRNA) that includes 21 cancer types (Wang et al. 2017); each has a varying number of patients. Table 2 depicts the total number of patients for each cancer and the number of patients that experienced the event (died) during the time of the clinical study (δ = 1). We also extract the treatment performed for each cancer type (if available). https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga |
| Dataset Splits | Yes | All results are averaged over five folds. In the supervised target case, we allow a small portion of the target domain to be labeled and used for training. We use these percentages, 5%, 10%, 15%, 20%, and 25%. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like PyTorch and pysurvival but does not provide specific version numbers for these or other key software dependencies. |
| Experiment Setup | Yes | For both MSSDA and DeepSurv, we use the same architecture, a two-layered feature extractor with 200 and 20 units in the first and second hidden layers, respectively. [...] MSSDA and DeepSurv are trained for 20 epochs. |
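The experiment setup above (a two-layered feature extractor with 200 and 20 hidden units, whose output hφθ(x) is exponentiated into a Cox-style risk score r(x) = e^{hφθ(x)}) can be sketched in dependency-free Python. This is only an illustration of the described shape, not the authors' implementation (they use PyTorch); the ReLU activations, the linear log-risk head, and the weight initialization are assumptions not stated in the quoted text.

```python
import math
import random

def init_layer(n_in, n_out, rng):
    # One dense layer: n_out weight rows of length n_in, plus zero biases.
    # Small uniform initialization is an assumption for illustration only.
    weights = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def log_risk(x, layers):
    """Two hidden layers (e.g. 200 then 20 units, as in the paper's setup)
    with assumed ReLU activations, followed by a linear scalar head that
    plays the role of the log-risk function hφθ(x)."""
    h = x
    for weights, biases in layers[:-1]:
        h = [max(0.0, sum(w * v for w, v in zip(row, h)) + b)  # ReLU hidden unit
             for row, b in zip(weights, biases)]
    weights, biases = layers[-1]
    return sum(w * v for w, v in zip(weights[0], h)) + biases[0]

def risk(x, layers):
    # r(x) = exp(hφθ(x)): the Cox-style risk score from the quoted setup.
    return math.exp(log_risk(x, layers))

# Toy usage: 8 input features instead of the paper's 19171, same layer shape.
rng = random.Random(0)
layers = [init_layer(8, 200, rng), init_layer(200, 20, rng), init_layer(20, 1, rng)]
score = risk([1.0] * 8, layers)
```

Only the layer widths (200 and 20) and the exponentiated log-risk come from the paper's text; everything else is filled in to make the sketch runnable.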