Disease Trajectory Maps

Authors: Peter Schulam, Raman Arora

NeurIPS 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "To demonstrate the DTM, we analyze data collected on patients with the complex autoimmune disease, scleroderma. We find that DTM learns meaningful representations of disease trajectories and that the representations are significantly associated with important clinical outcomes." |
| Researcher Affiliation | Academia | Peter Schulam, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218, pschulam@cs.jhu.edu; Raman Arora, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218, arora@cs.jhu.edu |
| Pseudocode | No | The paper describes the learning and inference algorithms in text (e.g., "We describe a stochastic variational inference algorithm"), but does not provide a formal pseudocode block or algorithm listing. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it link to a code repository. |
| Open Datasets | No | The paper uses data from "the Johns Hopkins Hospital Scleroderma Center's patient registry", an internal institutional dataset; no information on its public availability or access is provided. For example: "We extract trajectories from the Johns Hopkins Hospital Scleroderma Center's patient registry; one of the largest in the world." |
| Dataset Splits | Yes | "We present held-out data log-likelihoods in Table 1, which are estimated using 10-fold cross-validation." |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU, GPU, memory, or cloud instance types). |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | "For all experiments and all models, we use a common 5-dimensional B-spline basis composed of degree-2 polynomials (see e.g. Chapter 20 in Gelman et al. [2014]). We choose knots using the percentiles of observation times across the entire training set [Ramsay et al., 2002]. ... For both PFVC and TSS, we use minibatches of size 25 and learn for a total of five epochs (passes over the training data). The initial learning rate for m and S is 0.1 and decays as t⁻¹ for each epoch t." |
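The Experiment Setup row above describes two concrete choices that a replication would need to implement: a 5-dimensional degree-2 B-spline basis with knots placed at percentiles of the training observation times, and a learning rate of 0.1 decaying as t⁻¹ per epoch. The sketch below is one plausible reading of that description, not the authors' code; the function names (`bspline_basis`, `lr_schedule`) are illustrative, and the use of SciPy and a clamped-knot construction are assumptions, since the paper names no software.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(times_train, degree=2, n_basis=5):
    """Build a clamped B-spline basis of `n_basis` degree-`degree` functions,
    with interior knots at percentiles of the training observation times
    (one plausible reading of the paper's setup; details are assumptions)."""
    n_interior = n_basis - degree - 1           # = 2 for a 5-dim, degree-2 basis
    lo, hi = times_train.min(), times_train.max()
    # Evenly spaced interior percentiles (e.g. 33rd and 67th); the paper
    # does not specify which percentiles are used.
    pct = np.linspace(0, 100, n_interior + 2)[1:-1]
    interior = np.percentile(times_train, pct)
    # Clamped boundary knots: repeat each endpoint degree+1 times.
    knots = np.concatenate([[lo] * (degree + 1), interior, [hi] * (degree + 1)])

    def design(x):
        # Evaluate each basis function via an identity coefficient matrix;
        # returns an (len(x), n_basis) design matrix.
        return np.column_stack([
            BSpline(knots, np.eye(n_basis)[j], degree)(x)
            for j in range(n_basis)
        ])
    return design

def lr_schedule(epoch, lr0=0.1):
    """Initial learning rate 0.1 decaying as t^-1 for each epoch t = 1, 2, ..."""
    return lr0 / epoch
```

Inside the observed time range the basis columns form a partition of unity (each row of the design matrix sums to 1), which is a quick sanity check on the knot construction.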