Neural Pharmacodynamic State Space Modeling

Authors: Zeshan M Hussain, Rahul G. Krishnan, David Sontag

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We study SSM_PK-PD on three different datasets: two here, and a third semi-synthetic dataset in the appendix. ... Table 1 (top) depicts NELBOs on held-out synthetic data across different models... We report our results in Table 1 (bottom). First, we consistently observe that a high fraction of patient data in the test set is explained better by SSM_PK-PD than the corresponding baseline (pairwise comparisons)."
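The pairwise comparison quoted above reduces to a simple per-patient statistic. A minimal sketch, assuming each model yields one held-out negative ELBO per patient (`fraction_better` is a hypothetical helper, not from the paper's code):

```python
def fraction_better(nelbo_model, nelbo_baseline):
    """Fraction of held-out patients whose negative ELBO under the model
    is lower (i.e. better) than under the paired baseline."""
    pairs = list(zip(nelbo_model, nelbo_baseline))
    # Each patient contributes one paired comparison; a value near 1.0
    # means the model explains most patients better than the baseline.
    return sum(m < b for m, b in pairs) / len(pairs)

# Toy usage: the model wins on 2 of 3 patients.
score = fraction_better([1.0, 2.0, 3.0], [2.0, 2.0, 4.0])
```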
Researcher Affiliation | Collaboration | "1 Massachusetts Institute of Technology, CSAIL and IMES, Cambridge, MA; 2 Microsoft Research New England, Cambridge, MA."
Pseudocode | No | The paper does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Our model code can be found at https://github.com/clinicalml/ief"
Open Datasets | Yes | "Finally, we release a disease progression benchmark dataset called ML-MMRF, comprising a curated, preprocessed subset of data from the Multiple Myeloma Research Foundation CoMMpass study (US National Institutes of Health, and others)."
Dataset Splits | Yes | "Our results are obtained using a 75/25 train/test split. To select hyperparameters, we perform 5-fold cross validation on the training set."
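The evaluation protocol in the row above can be sketched in a few lines. This is a stdlib-only illustration, not the paper's actual data pipeline; `make_splits` and its index-based folds are assumptions for the sake of the example:

```python
import random

def make_splits(n_patients, n_folds=5, seed=0):
    """Sketch of the protocol: a 75/25 train/test split of patient indices,
    then n_folds disjoint validation folds over the training indices for
    hyperparameter selection."""
    rng = random.Random(seed)
    idx = list(range(n_patients))
    rng.shuffle(idx)
    n_train = int(round(0.75 * n_patients))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    # Strided slicing partitions train_idx into n_folds disjoint folds.
    folds = [train_idx[i::n_folds] for i in range(n_folds)]
    cv = [(sorted(set(train_idx) - set(f)), f) for f in folds]
    return train_idx, test_idx, cv
```

The test set is held out entirely; all model selection happens inside the cross-validation folds on the training indices.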
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU model, CPU type, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions using ADAM for optimization and PyTorch in the references, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | "We learn via arg max_θ log p(X | U, B; θ) using ADAM (Kingma & Ba, 2014) with a learning rate of 0.001 for 15000 epochs. L1 or L2 regularization is applied in one of two ways: either we regularize all model parameters (including parameters of the inference network), or we regularize all weight matrices except those associated with the attention mechanism. We search over regularization strengths of 0.01, 0.1, 1, 10 and latent dimensions of 16, 48, 64 and 128."
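The hyperparameter search described above is a 4 × 4 grid (regularization strength × latent dimension). A minimal stdlib-only sketch of that search loop; `grid_search` and its `train_fn` callback (which would run training plus cross-validation and return a mean validation loss) are hypothetical names, not from the released code:

```python
import itertools

# Search space reported in the paper.
REG_STRENGTHS = [0.01, 0.1, 1, 10]
LATENT_DIMS = [16, 48, 64, 128]

def grid_search(train_fn):
    """Evaluate train_fn over the full 16-configuration grid and return the
    (reg_strength, latent_dim) pair with the lowest reported loss."""
    best, best_loss = None, float("inf")
    for reg, dim in itertools.product(REG_STRENGTHS, LATENT_DIMS):
        loss = train_fn(reg_strength=reg, latent_dim=dim)
        if loss < best_loss:
            best, best_loss = (reg, dim), loss
    return best, best_loss
```

In the paper's setup, each `train_fn` call would correspond to training with ADAM at learning rate 0.001 for 15000 epochs and scoring via 5-fold cross validation on the training set.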