Dynamic Survival Analysis with Controlled Latent States

Authors: Linus Bleistein, Van Tuan Nguyen, Adeline Fermanian, Agathe Guilloux

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide theoretical learning guarantees for both estimators, before showcasing the performance of our models on a vast array of simulated and real-world datasets from finance, predictive maintenance and food supply chain management. (...) 4. Experimental Evaluation
Researcher Affiliation | Collaboration | 1Inria Paris, F-75015 Paris, France 2Centre de Recherche des Cordeliers, INSERM, Université de Paris, Sorbonne Université, F-75006 Paris, France 3LaMME, UEVE and UMR 8071, Paris Saclay University, F-91042, Evry, France 4LOPF, Califrais Machine Learning Lab, Paris, France 5Laboratoire de Probabilités, Statistique et Modélisation, LPSM, Univ. Paris Cité, F-75005, Paris, France.
Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | All proofs are given in the appendix. The code is available at https://github.com/LinusBleistein/signature_survival.
Open Datasets | Yes | Predictive Maintenance (Saxena et al., 2008). This dataset collects simulated measurements of sensors placed on aircraft gas turbine engines run until a threshold value is reached.
Dataset Splits | Yes | For the experiments, each dataset is randomly divided into a training set (80%) and a test set (20%). Hyperparameter optimization is performed as follows: we split the training set, using 4/5 for training and 1/5 for validation, then re-fit on the whole training set with the best hyperparameters and report the results on the test set for 10 runs. (A sketch of this protocol follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as CPU or GPU models, or cloud computing specifications.
Software Dependencies | No | The paper mentions using specific software packages such as 'iisignature', 'PyTorch', and 'scikit-survival', as well as the Adam optimizer, but it does not provide version numbers for any of these dependencies.
Experiment Setup | Yes | The initial learning rate of the proximal gradient algorithm is set to 1e-3, and the learning rate at each iteration is chosen by a backtracking line search. The hyperparameters of penalization strength (η1, η2) and truncation depth N are chosen by 1-fold cross-validation of a mixed metric equal to the difference between the C-index and the Brier score. We select the best hyperparameters that minimize the average of this mixed metric on the validation set. The hyperparameter search space is: η1 ∈ {1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5}; η2 ∈ {1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5}; N ∈ {2, 3}. (...) The model was trained for 50 epochs using the Adam optimizer (Kingma & Ba, 2015) with a batch size of 32 and a cross-validated learning rate set to 1e-4. (Sketches of the mixed metric, the proximal step, and the training loop follow the table.)
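
As a reading aid for the Dataset Splits row, here is a minimal sketch of the quoted protocol, assuming generic array inputs and caller-supplied `fit`, `score`, and `grid`; all names are illustrative, not the authors' API, and `score` is assumed higher-is-better.

```python
# Hedged sketch of the quoted evaluation protocol: 80/20 train/test
# split, then a 4/5 vs 1/5 split of the training set for hyperparameter
# validation, a re-fit on the whole training set, and 10 repeated runs.
import numpy as np
from sklearn.model_selection import train_test_split

def evaluate(X, y, fit, score, grid, n_runs=10, seed=0):
    """`fit(X, y, h)` returns a fitted model; `score(model, X, y)` is
    assumed higher-is-better; `grid` is a list of hyperparameter
    settings. All three are placeholders for model-specific code."""
    scores = []
    for run in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed + run)
        X_fit, X_val, y_fit, y_val = train_test_split(
            X_tr, y_tr, test_size=0.2, random_state=seed + run)
        best = max(grid, key=lambda h: score(fit(X_fit, y_fit, h), X_val, y_val))
        model = fit(X_tr, y_tr, best)  # re-fit on the whole training set
        scores.append(score(model, X_te, y_te))
    return float(np.mean(scores)), float(np.std(scores))
```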
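The mixed metric quoted in the Experiment Setup row (difference between the C-index and the Brier score) could be computed with scikit-survival roughly as follows; the risk scores, survival-probability matrix, and time grid are assumptions about the model's outputs, not the released code.

```python
# Hedged sketch of the mixed selection metric: C-index minus integrated
# Brier score. Field names in the structured survival arrays follow the
# scikit-survival convention (event indicator first, time second).
from sksurv.metrics import concordance_index_censored, integrated_brier_score

def mixed_metric(y_train, y_val, risk, surv_prob, times):
    # `risk`: higher means earlier predicted failure; `surv_prob` has
    # shape (n_samples, len(times)) with survival probabilities.
    event_field, time_field = y_val.dtype.names
    cindex = concordance_index_censored(
        y_val[event_field], y_val[time_field], risk)[0]
    ibs = integrated_brier_score(y_train, y_val, surv_prob, times)
    # Under this C-index-minus-Brier convention higher is better; the
    # quote above speaks of *minimizing* the metric, so flip the sign
    # if reproducing the paper's selection rule verbatim.
    return cindex - ibs
```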
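The proximal gradient step with a backtracking line search is a standard routine; the sketch below illustrates it with the quoted initial step size of 1e-3, using a generic lasso-type objective in place of the paper's (η1, η2)-penalized loss.

```python
# Generic proximal-gradient (ISTA) sketch with backtracking line
# search, started at the quoted initial step size of 1e-3. The l1
# penalty is a stand-in: the paper's actual (eta1, eta2) penalties and
# objective are not reproduced here.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_backtracking(grad, loss, prox, x0, lr0=1e-3, shrink=0.5, n_iter=200):
    x = x0.copy()
    for _ in range(n_iter):
        g, lr = grad(x), lr0
        while True:
            z = prox(x - lr * g, lr)
            d = z - x
            # Sufficient decrease: f(z) <= f(x) + <g, d> + ||d||^2 / (2 lr)
            if loss(z) <= loss(x) + g @ d + d @ d / (2 * lr):
                break
            lr *= shrink  # backtrack until the condition holds
        x = z
    return x

# Illustrative use on a lasso-type problem 0.5*||Ax - b||^2 + eta*||x||_1.
rng = np.random.default_rng(0)
A, b, eta = rng.normal(size=(40, 10)), rng.normal(size=40), 0.1
x_hat = ista_backtracking(
    grad=lambda x: A.T @ (A @ x - b),
    loss=lambda x: 0.5 * np.sum((A @ x - b) ** 2),
    prox=lambda v, t: soft_threshold(v, eta * t),
    x0=np.zeros(10),
)
```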
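Finally, the quoted Adam setup (50 epochs, batch size 32, learning rate 1e-4) corresponds to a routine PyTorch training loop; `model`, `loss_fn`, and `dataset` below are placeholders rather than the authors' implementation.

```python
# Hedged PyTorch loop matching the quoted setup: Adam, 50 epochs,
# batch size 32, learning rate 1e-4. `model`, `loss_fn`, and `dataset`
# are placeholders, not the authors' released implementation.
import torch
from torch.utils.data import DataLoader

def train(model, loss_fn, dataset, epochs=50, batch_size=32, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()   # backpropagate
            optimizer.step()  # Adam update
    return model
```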