Transfer Learning via Optimal Transportation for Integrative Cancer Patient Stratification

Authors: Ziyu Liu, Wei Shao, Jie Zhang, Min Zhang, Kun Huang

IJCAI 2021

Reproducibility Variables

Research Type: Experimental
  "We evaluate the stratification performance on three early-stage cancers from the Cancer Genome Atlas (TCGA) project. Comparing with other benchmark methods, our framework achieves superior accuracy for patient outcome prediction."

Researcher Affiliation: Academia
  (1) Department of Statistics, Purdue University; (2) Biostatistics and Health Data Science, Indiana University School of Medicine; (3) Department of Medical and Molecular Genetics, Indiana University School of Medicine; (4) Regenstrief Institute, Indianapolis. {liu2301, minzhang}@purdue.edu, {shaowei, jizhan, kunhuang}@iu.edu

Pseudocode: Yes
  "Algorithm 1: Multi-view transfer learning via Optimal Transport"

Open Source Code: No
  The paper provides no explicit statement of, or link to, open-source code for its methodology. It cites the third-party library 'POT: Python Optimal Transport' but not its own implementation.

Open Datasets: Yes
  "We focus on three cancer types including breast invasive carcinoma (BRCA), kidney renal papillary cell carcinoma (KIRP), and lung squamous cell carcinoma (LUSC). For each cancer type, we select early-stage (stages I and II) patients with matched gene expression data, histopathological images, and clinical outcomes. We apply the MVTOT method to integrate eigen-genes and tissue morphological features and transfer knowledge from one cancer type to the other."

Dataset Splits: No
  The paper mentions a 'training set' for the benchmark methods and 'validation' only in the sense of algorithm convergence; it does not specify explicit train/validation/test splits (percentages, sample counts, or predefined splits) for its own experiments on the TCGA dataset.

Hardware Specification: No
  The paper provides no hardware details such as GPU or CPU models, processor types, or memory used for its experiments.

Software Dependencies: No
  The paper mentions software tools such as 'POT: Python Optimal Transport' and 'iCluster', but it does not provide version numbers for these or any other software dependencies.

Experiment Setup: No
  The paper lists the algorithm's hyper-parameters (K, α, β, γ1, γ2, ϵ) but does not give their specific values or other concrete setup details such as optimizer settings, learning rates, or number of epochs.
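Since the authors released no code, the entropy-regularized optimal transport at the core of the method (the ϵ hyper-parameter noted above) can only be approximated from the paper's description. The sketch below implements plain Sinkhorn iterations in NumPy; the point-cloud sizes, feature dimension, and the values of `eps` and `n_iter` are illustrative assumptions, not the authors' settings, and the toy data stands in for the paper's eigen-gene and morphological features.

```python
import numpy as np

def sinkhorn(a, b, M, eps=0.05, n_iter=1000):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b : source / target marginal weights (each sums to 1)
    M    : cost matrix of shape (len(a), len(b))
    eps  : entropic regularization strength (the paper's epsilon;
           the value used here is an illustrative assumption)
    Returns a transport plan T with row sums ~ a and column sums ~ b.
    """
    K = np.exp(-M / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):       # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy stand-in data: 5 "source-cancer" and 6 "target-cancer" patients
# with 3 features each (purely synthetic, for illustration only).
rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, (5, 3))
xt = rng.normal(1.0, 1.0, (6, 3))

M = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
M = M / M.max()                                       # normalize for stability
a = np.full(5, 1 / 5)                                 # uniform marginals
b = np.full(6, 1 / 6)

T = sinkhorn(a, b, M)   # coupling matrix: how much source mass goes where
```

In practice one would use the POT library the paper cites (its `ot.sinkhorn(a, b, M, reg)` solver computes the same plan) rather than hand-rolling the loop; the NumPy version is shown only to make the role of ϵ explicit.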