Learning Survival Distribution with Implicit Survival Function

Authors: Yu Ling, Weimin Tan, Bo Yan

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that ISF outperforms state-of-the-art methods on three public datasets and is robust to the hyperparameter controlling estimation precision. To demonstrate the performance of the proposed model against the state-of-the-art methods, experiments are conducted on several real-world datasets.
Researcher Affiliation | Academia | School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Shanghai Collaborative Innovation Center of Intelligent Visual Computing, Fudan University, Shanghai, China. yling21@m.fudan.edu.cn, {wmtan, byan}@fudan.edu.cn
Pseudocode | No | The paper does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present structured algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/Bcai0797/ISF.
Open Datasets | Yes | CLINIC tracks patients' clinic status [Knaus et al., 1995]. MUSIC is a user-lifetime dataset containing about 1,000 users with their entire listening histories [Jing and Smola, 2017]. METABRIC contains gene expression profiles and clinical features of breast cancer from 1,981 patients [Curtis et al., 2012].
Dataset Splits | Yes | The training/testing split of CLINIC and MUSIC follows the setting of DRSA [Ren et al., 2019]. For METABRIC, 5-fold cross-validation is applied following DeepHit [Lee et al., 2018].
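The 5-fold cross-validation protocol used for METABRIC can be sketched with plain index arithmetic. This is a minimal illustration, not the paper's code: the patient count (1,981) comes from the dataset row above, while the helper name and the fixed seed are assumptions.

```python
import random

def five_fold_splits(n_samples, n_folds=5, seed=0):
    """Partition sample indices into n_folds disjoint train/test splits."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)  # shuffle once so folds are random
    # Round-robin assignment keeps fold sizes within one sample of each other
    folds = [indices[k::n_folds] for k in range(n_folds)]
    splits = []
    for k in range(n_folds):
        test = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        splits.append((train, test))
    return splits

# METABRIC has 1,981 patients (per the dataset description above)
splits = five_fold_splits(1981)
```

Each patient appears in exactly one test fold, so every sample is evaluated on once across the five runs.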
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., CPU or GPU models, or cloud computing instances with their specifications).
Software Dependencies | No | The paper states 'ISF is implemented with PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | The numbers of hidden units of E(·) defined in Eq. 10 and H(·) defined in Eq. 11 are set to {256, 512, 256} and {256, 256, 1}, respectively, for all experiments. During training, the Adam optimizer is used. The model with the best CI is selected by varying the learning rate over {10⁻³, 10⁻⁴, 10⁻⁵}, the weight decay over {10⁻³, 10⁻⁴, 10⁻⁵}, and the batch size over {8, 16, 32, 64, 128, 256}.
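The hyperparameter search described above is a full grid over three learning rates, three weight-decay values, and six batch sizes. A minimal sketch of enumerating that grid (the selection loop itself, i.e. training each model and scoring its concordance index, is not shown, and the dictionary keys are illustrative):

```python
from itertools import product

# Hyperparameter grid quoted from the experiment-setup row above
learning_rates = [1e-3, 1e-4, 1e-5]
weight_decays = [1e-3, 1e-4, 1e-5]
batch_sizes = [8, 16, 32, 64, 128, 256]

# Every combination is a candidate configuration; the paper keeps the
# one achieving the best concordance index (CI) on each dataset.
configs = [
    {"lr": lr, "weight_decay": wd, "batch_size": bs}
    for lr, wd, bs in product(learning_rates, weight_decays, batch_sizes)
]
```

With 3 × 3 × 6 values, the grid contains 54 candidate configurations per dataset.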