Explainable Survival Analysis with Convolution-Involved Vision Transformer
Authors: Yifan Shen, Li Liu, Zhihao Tang, Zongyi Chen, Guixiang Ma, Jiyan Dong, Xi Zhang, Lin Yang, Qingfeng Zheng
AAAI 2022, pp. 2207-2215
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations on two large cancer datasets show that our proposed model is more effective and has better interpretability for survival prediction. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Trustworthy Distributed Computing and Service (MoE), Beijing University of Posts and Telecommunications, China; (2) National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College; (3) University of Illinois at Chicago |
| Pseudocode | No | The paper describes the model architecture and methodology in detail using text and diagrams, but does not provide pseudocode or an algorithm block. |
| Open Source Code | No | We would make the code publicly available upon acceptance. |
| Open Datasets | Yes | We use two datasets to evaluate the performance of our model. One is a public National Lung Screening Trial (Team 2011) (NLST) dataset collected by the National Cancer Institute's Division of Cancer Prevention (DCP) and Division of Cancer Treatment and Diagnosis (DCTD), which can be downloaded from the Internet via application. |
| Dataset Splits | Yes | We split the NLST dataset and CHCAMS dataset into training, validation, and testing set with a split ratio of 8:1:1. |
| Hardware Specification | Yes | All the experiments run on NVIDIA V100 GPU. |
| Software Dependencies | No | All other methods are built using the functions from the lifelines package, which is a survival analysis library available on GitHub. |
| Experiment Setup | Yes | For training, the parameters are optimized using the Adam algorithm, where the learning rate is initialized at 0.01. We set the dimension of the hidden feature vector as 256. The batch size is set to 64. The training process is iterated for 1000 epochs. The patch size is set to 16×16. |
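The reported 8:1:1 dataset split and the training hyperparameters quoted above can be sketched as follows. This is an illustrative reconstruction only: the authors did not release code, so the function name, seed handling, and config structure below are assumptions, not the paper's implementation.

```python
import random

def split_811(items, seed=0):
    """Shuffle and partition a dataset into train/val/test with the
    8:1:1 ratio reported for the NLST and CHCAMS datasets.
    Hypothetical sketch; the authors' actual partitioning is not public."""
    rng = random.Random(seed)
    items = list(items)
    rng.shuffle(items)
    n_train = int(len(items) * 0.8)
    n_val = int(len(items) * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

# Hyperparameters as stated in the Experiment Setup row
# (the dict layout itself is illustrative, not from the paper).
CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 0.01,
    "hidden_dim": 256,
    "batch_size": 64,
    "epochs": 1000,
    "patch_size": (16, 16),
}

train, val, test = split_811(range(100))
# With 100 samples this yields 80 / 10 / 10 items.
```

Any remainder from the integer truncation falls into the test set; a real pipeline would also need to fix the seed across runs to make the split reproducible.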