Deep Recurrent Survival Analysis

Authors: Kan Ren, Jiarui Qin, Lei Zheng, Zhengyu Yang, Weinan Zhang, Lin Qiu, Yong Yu

AAAI 2019, pp. 4798-4805

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In the experiments on the three real-world tasks from different fields, our model significantly outperforms the state-of-the-art solutions under various metrics."
Researcher Affiliation | Academia | Kan Ren, Jiarui Qin, Lei Zheng, Zhengyu Yang, Weinan Zhang, Lin Qiu, Yong Yu. APEX Data & Knowledge Management Lab, Shanghai Jiao Tong University. {kren, qinjr, zhenglei, zyyang, wnzhang, lqiu, yyu}@apex.sjtu.edu.cn
Pseudocode | No | The paper includes Figure 1, "Detailed illustration of Deep Recurrent Survival Analysis (DRSA) model," which is a diagram, not structured pseudocode or an algorithm block. (A hedged sketch of the model's hazard formulation appears after this table.)
Open Source Code | Yes | "We also published the implementation code for reproductive experiments." Footnote 1: "Reproductive code link: https://github.com/rk2900/drsa"
Open Datasets | Yes | "We evaluate our model with strong baselines in three real-world tasks. We also published the processed full datasets." Footnote 2: sampled data is included with the published code; the three processed full datasets are at https://goo.gl/nUFND4
Dataset Splits | No | "We split the CLINIC and MUSIC datasets to training and test sets with ratio of 4:1 and 6:1, respectively." The paper does not explicitly mention a validation split. (See the split sketch after this table.)
Hardware Specification | No | The paper does not provide hardware details (e.g., GPU/CPU models, memory) for running the experiments; it discusses only the model and its performance.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., "PyTorch 1.9", "Python 3.8").
Experiment Setup | No | The paper states that "hyperparameter α controls the loss value balance between them" and that "the discussion about various interval sizes has been included in the supplemental materials," but the main text gives no concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configuration. (A hedged sketch of an α-balanced loss appears after this table.)
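
Since the paper provides only a diagram (Figure 1) rather than pseudocode, the following is a minimal sketch of the conditional-hazard idea DRSA describes: a recurrent network emits a hazard rate h_l = P(z = t_l | z > t_{l-1}, x) at each discretized time step, and survival is the running product of (1 - h_l). The PyTorch module, layer sizes, and the normalized time-index feature are illustrative assumptions, not the authors' implementation (which is linked above).

```python
import torch
import torch.nn as nn

class DRSASketch(nn.Module):
    """Illustrative recurrent hazard model in the spirit of DRSA (not the authors' code)."""

    def __init__(self, feature_dim: int, hidden_dim: int = 32):
        super().__init__()
        # +1 input dimension for a normalized time-step feature (an assumption).
        self.rnn = nn.LSTM(feature_dim + 1, hidden_dim, batch_first=True)
        self.hazard_head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor, num_steps: int):
        # x: (batch, feature_dim) static covariates, replicated at every step.
        batch = x.size(0)
        t = torch.arange(1, num_steps + 1, dtype=x.dtype, device=x.device)
        time_feat = (t / num_steps).view(1, num_steps, 1).expand(batch, -1, -1)
        inp = torch.cat([x.unsqueeze(1).expand(-1, num_steps, -1), time_feat], dim=-1)
        out, _ = self.rnn(inp)
        # Conditional hazard h_l = P(z = t_l | z > t_{l-1}, x) at each step.
        hazard = torch.sigmoid(self.hazard_head(out)).squeeze(-1)  # (batch, steps)
        # Survival S(t_l) = prod_{k <= l} (1 - h_k).
        survival = torch.cumprod(1.0 - hazard, dim=1)
        return hazard, survival

model = DRSASketch(feature_dim=8)
h, s = model(torch.randn(4, 8), num_steps=12)  # h, s: (4, 12)
```

The event probability mass at step l then follows as p_l = h_l * S(t_{l-1}), which is how a model of this shape can be trained by maximum likelihood.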
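The quoted 4:1 and 6:1 train/test ratios translate to held-out fractions of 1/5 and 1/7. A minimal sketch of reproducing those ratios, assuming shuffled row-level splits (the paper does not specify the splitting procedure; clinic and music below are placeholder arrays, not the real datasets):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
clinic = rng.normal(size=(1000, 10))  # placeholder standing in for CLINIC rows
music = rng.normal(size=(1400, 10))   # placeholder standing in for MUSIC rows

# CLINIC: 4:1 train/test -> hold out 1/5 of the rows.
clinic_train, clinic_test = train_test_split(clinic, test_size=0.2, random_state=42)
# MUSIC: 6:1 train/test -> hold out 1/7 of the rows.
music_train, music_test = train_test_split(music, test_size=1/7, random_state=42)
# No validation split is described; one could carve it out of the training set.
```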
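The paper says only that α balances two loss terms. Below is a hedged sketch of one plausible reading: a weighted combination of an event-time likelihood for uncensored samples and a survival likelihood for censored ones. The exact terms, the weighting form, and the value α = 0.25 are assumptions for illustration, not the paper's definition.

```python
import torch

def alpha_balanced_loss(hazard, step, censored, alpha=0.25):
    """Hypothetical alpha-balanced survival loss (alpha = 0.25 is arbitrary).

    hazard:   (batch, steps) conditional hazards h_l from the model
    step:     (batch,) long tensor, observed event or censoring step index
    censored: (batch,) bool tensor, True for right-censored observations
    """
    eps = 1e-7
    survival = torch.cumprod(1.0 - hazard, dim=1).clamp_min(eps)
    idx = torch.arange(hazard.size(0))
    # S(t_{l-1}), defined as 1 at the first step.
    s_prev = torch.where(step > 0,
                         survival[idx, (step - 1).clamp_min(0)],
                         torch.ones_like(survival[:, 0]))
    # Uncensored: log p(z = t_l | x) = log h_l + log S(t_{l-1}).
    event_ll = torch.log(hazard[idx, step].clamp_min(eps)) + torch.log(s_prev)
    # Censored: log S(t_c), i.e. surviving past the censoring step.
    censor_ll = torch.log(survival[idx, step])
    loss_event = -(event_ll * ~censored).sum() / (~censored).sum().clamp_min(1)
    loss_censor = -(censor_ll * censored).sum() / censored.sum().clamp_min(1)
    return alpha * loss_event + (1.0 - alpha) * loss_censor
```

Under this reading, α = 1 would reduce the objective to the uncensored likelihood alone; the paper's actual combination may differ.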