Forecasting Treatment Responses Over Time Using Recurrent Marginal Structural Networks

Authors: Bryan Lim, Ahmed Alaa, Mihaela van der Schaar

NeurIPS 2018

Reproducibility assessment: each variable below is listed with its result and the LLM response that supports it.
Research Type: Experimental
LLM Response: "Using simulations of a state-of-the-art pharmacokinetic-pharmacodynamic (PK-PD) model of tumor growth [12], we demonstrate the ability of our network to accurately learn unbiased treatment responses from observational data, even under changes in the policy of treatment assignments, and performance gains over benchmarks."
Researcher Affiliation: Academia
LLM Response: Bryan Lim (Department of Engineering Science, University of Oxford, bryan.lim@eng.ox.ac.uk); Ahmed Alaa (Electrical Engineering Department, University of California, Los Angeles, ahmedmalaa@ucla.edu); Mihaela van der Schaar (University of Oxford and The Alan Turing Institute, mschaar@turing.ac.uk).
Pseudocode: No
LLM Response: The paper describes the training procedure verbally in Section 4.3 and illustrates it with Figure 3, but it does not provide structured pseudocode or an algorithm block.
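Since the training procedure is given only in prose, a minimal sketch of its central quantity, the stabilized inverse-probability-of-treatment weights produced by the propensity networks, is offered below; the array shapes, the epsilon guard, and the percentile truncation are assumptions here rather than details quoted from the paper.

```python
import numpy as np

# Minimal sketch of stabilized IPTW weights, the quantity the propensity
# networks exist to produce: SW(t) = prod_{s<=t} f(A_s | action history)
# / f(A_s | action and covariate history). Inputs are the probabilities
# each propensity LSTM assigns to the treatments actually administered,
# with assumed shape (num_paths, num_steps).
def stabilized_weights(p_num, p_den, eps=1e-8, trunc_pct=(1, 99)):
    ratio = p_num / np.maximum(p_den, eps)   # per-step weight ratio
    sw = np.cumprod(ratio, axis=1)           # accumulate over time
    lo, hi = np.percentile(sw, trunc_pct)    # truncate extremes (assumed)
    return np.clip(sw, lo, hi)
```

In the procedure of Section 4.3, weights like these would then reweight the encoder and decoder losses so that the forecasting networks learn unbiased treatment responses.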
Open Source Code: Yes
LLM Response: "with the source code uploaded onto GitHub" (https://github.com/sjblim/rmsn_nips_2018).
Open Datasets: No
LLM Response: "As confounding effects in real-world datasets are unknown a priori, methods for treatment response estimation are often evaluated using data simulations, where treatment application policies are explicitly modeled [34, 33, 35]. To ensure that our tests are fully reproducible and realistic from a medical perspective, we adopt the pharmacokinetic-pharmacodynamic (PK-PD) model of [12]."
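To make the simulation-based evaluation concrete, here is a toy rollout in the spirit of a PK-PD tumor-growth model like that of [12]; the recurrence form, parameter values, treatment policy, and drug-concentration decay are all illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tumor_path(T=60, v0=1.0, rho=0.1, K=30.0, beta_c=0.03,
                        alpha_r=0.03, beta_r=0.003, sigma=0.01):
    """Toy rollout of a tumor-volume recurrence in the spirit of the
    paper's Section 5.1; every constant here is a placeholder."""
    v = np.empty(T)
    v[0] = v0
    conc = 0.0                                    # chemo concentration
    for t in range(1, T):
        chemo = rng.random() < 0.3                # toy assignment policy
        radio = rng.random() < 0.3
        conc = 0.5 * conc + (5.0 if chemo else 0.0)   # toy PK decay
        dose = 2.0 if radio else 0.0                  # radiotherapy dose
        noise = rng.normal(0.0, sigma)
        growth = (1.0 + rho * np.log(K / v[t - 1])
                  - beta_c * conc
                  - (alpha_r * dose + beta_r * dose ** 2)
                  + noise)
        v[t] = max(1e-3, growth * v[t - 1])       # keep volume positive
    return v
```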
Dataset Splits: Yes
LLM Response: "Using the simulation model in Section 5.1, we simulated 10,000 paths to be used for model training, 1,000 for validation data used in hyperparameter optimization, and another 1,000 for out-of-sample testing."
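Given any path generator, the stated split sizes are straightforward to reproduce; simulate_tumor_path here is the toy sketch from the previous block, not the authors' generator.

```python
# Split sizes as stated in the paper: 10,000 train / 1,000 validation /
# 1,000 out-of-sample test.
train_paths = [simulate_tumor_path() for _ in range(10_000)]
val_paths = [simulate_tumor_path() for _ in range(1_000)]
test_paths = [simulate_tumor_path() for _ in range(1_000)]
```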
Hardware Specification: No
LLM Response: The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory specifications, used for running the experiments.
Software Dependencies: No
LLM Response: The paper mentions software components and algorithms such as 'LSTMs', 'ELU [7]', and 'Adam [20]', but does not provide specific version numbers for any libraries, frameworks, or programming languages used.
Experiment Setup: Yes
LLM Response: "For the continuous predictions in Section 5, we used Exponential Linear Unit (ELU [7]) state activations and a linear output layer. ... LSTMs were fitted with tanh state activations and sigmoid outputs. ... propensity networks were trained using standard binary cross entropy loss. ... the loss function for the encoder as a weighted mean-squared error loss (Lencoder in Equation 5). ... For continuous predictions, the loss function for the decoder (Ldecoder) can also be found in Equation 5. ... observations were batched into shorter sequences of up to τmax steps."
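Putting the quoted design choices together, a minimal Keras-style sketch of the two network types might look as follows; the layer sizes, feature count, number of treatments, and use of TensorFlow 2 Keras are assumptions, and weighted_mse merely stands in for the Lencoder/Ldecoder losses of Equation 5, with the stabilized weights treated as fixed inputs.

```python
import tensorflow as tf

# Forecasting network: ELU state activations and a linear output layer,
# as quoted above. The 8 input features and 64 units are placeholders.
inputs = tf.keras.Input(shape=(None, 8))
states = tf.keras.layers.LSTM(64, activation="elu",
                              return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(1, activation=None)(states)  # linear
forecaster = tf.keras.Model(inputs, outputs)
optimizer = tf.keras.optimizers.Adam()  # Adam [20]; default settings assumed

def weighted_mse(y_true, y_pred, sw):
    # Weighted mean-squared error in the spirit of Equation 5; sw holds
    # the stabilized weights, assumed shape (batch, time).
    per_step = tf.reduce_mean(tf.square(y_true - y_pred), axis=-1)
    return tf.reduce_mean(sw * per_step)

# Propensity network: tanh state activations (the LSTM default) with
# sigmoid outputs, trained with standard binary cross-entropy.
propensity = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.Dense(2, activation="sigmoid"),  # one per treatment
])
propensity.compile(optimizer="adam", loss="binary_crossentropy")
```

The quote's final detail, batching observations into shorter sequences of up to τmax steps, would correspond here to truncating each simulated path before it is fed to the model.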