Temporal Phenotyping using Deep Predictive Clustering of Disease Progression

Authors: Changhee Lee, Mihaela van der Schaar

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we provide a set of experiments using two real-world time-series datasets.
Researcher Affiliation | Academia | 1 University of California, Los Angeles, USA; 2 University of Cambridge, UK; 3 Alan Turing Institute, UK.
Pseudocode | Yes | Pseudo-code of AC-TPC can be found in the Supplementary Material.
Open Source Code | Yes | Source code available at https://github.com/chl8856/AC_TPC.
Open Datasets | Yes | UK Cystic Fibrosis registry (UKCF): https://www.cysticfibrosis.org.uk
Dataset Splits | Yes | Here, all the results are reported using 5 random 64/16/20 train/validation/test splits. (A split sketch is given after the table.)
Hardware Specification | No | The paper describes network architectures and optimization parameters (e.g., LSTM with 50 nodes, Adam optimizer), but does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions using LSTM, Xavier initialization, and Adam optimizer, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For the network architecture, we constructed the encoder utilizing a single-layer LSTM (Hochreiter & Schmidhuber, 1997) with 50 nodes and constructed the selector and predictor utilizing two-layer fully-connected networks with 50 nodes in each layer, respectively. The parameters (θ, ψ, φ) are initialized by Xavier initialization (Glorot & Bengio, 2010) and optimized via the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001 and keep probability 0.7. We chose the balancing coefficients α, β ∈ {0.001, 0.01, 0.1, 1.0} utilizing a grid search that achieves the minimum validation loss in (2). (An architecture sketch is given after the table.)
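
The 64/16/20 protocol reported in the Dataset Splits row corresponds to a standard two-stage patient-level split. The sketch below is an assumption about how such splits could be generated (scikit-learn and NumPy; the function name and seed are hypothetical), not the authors' code:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def make_splits(n_patients, n_repeats=5, seed=0):
    """Generate 5 random 64/16/20 train/validation/test splits over patient indices."""
    rng = np.random.RandomState(seed)
    splits = []
    for _ in range(n_repeats):
        idx = np.arange(n_patients)
        # Hold out 20% of patients for testing, then 20% of the remainder
        # (16% of the total) for validation, leaving 64% for training.
        train_val, test = train_test_split(idx, test_size=0.20, random_state=rng)
        train, val = train_test_split(train_val, test_size=0.20, random_state=rng)
        splits.append((train, val, test))
    return splits
```

Splitting at the patient level keeps all time steps of a patient inside a single partition, which avoids leakage between the train, validation, and test sets.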
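
The Experiment Setup row can be mirrored in code. The following PyTorch sketch reflects only the quoted settings (single-layer LSTM encoder with 50 nodes, two-layer selector and predictor with 50 nodes per layer, Xavier initialization, Adam with learning rate 0.001, keep probability 0.7, i.e. dropout 0.3). It is not the authors' released implementation linked above; the cluster count K, the input/output dimensions, and the way the predictor consumes the selector's output are placeholders.

```python
import torch
import torch.nn as nn

class ACTPCSketch(nn.Module):
    """Rough sketch of the reported architecture: a single-layer LSTM encoder (50 nodes)
    plus a two-layer fully-connected selector and predictor (50 nodes per layer)."""
    def __init__(self, x_dim, y_dim, K, h_dim=50, dropout=0.3):  # keep probability 0.7 -> dropout 0.3
        super().__init__()
        self.encoder = nn.LSTM(x_dim, h_dim, num_layers=1, batch_first=True)
        self.selector = nn.Sequential(               # cluster-assignment probabilities
            nn.Linear(h_dim, h_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(h_dim, K), nn.Softmax(dim=-1))
        self.predictor = nn.Sequential(              # predictive label distribution
            nn.Linear(h_dim, h_dim), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(h_dim, y_dim), nn.Softmax(dim=-1))
        for p in self.parameters():                  # Xavier initialization for weight matrices
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def forward(self, x):
        h, _ = self.encoder(x)        # (batch, time, h_dim) latent encodings
        pi = self.selector(h)         # cluster-assignment probabilities per time step
        y_hat = self.predictor(h)     # outcome distribution per time step
        return h, pi, y_hat

# Adam with the reported learning rate of 0.001:
#   optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# The balancing coefficients alpha and beta would then be chosen from
# {0.001, 0.01, 0.1, 1.0} by grid search on the validation loss.
```

Note that this sketch omits the cluster-embedding dictionary and the actor-critic training procedure described in the paper; it only fixes the reported layer sizes and optimizer settings.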