Individualized Dosing Dynamics via Neural Eigen Decomposition

Authors: Stav Belogolovsky, Ido Greenberg, Danny Eytan, Shie Mannor

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We demonstrate the robustness of NESDE in both synthetic and real medical problems, and use the learned dynamics to publish simulated medical gym environments." In Sections 5 and 6, NESDE is tested on synthetic and real medical data. |
| Researcher Affiliation | Collaboration | Department of Electrical and Computer Engineering, Technion - Israel Institute of Technology, Haifa, Israel; Department of Physiology and Biophysics, Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel; Nvidia Research |
| Pseudocode | Yes | Algorithm 1: NESDE |
| Open Source Code | No | The paper does not include a direct link to the source code for the NESDE methodology or an explicit statement that the code is publicly available. |
| Open Datasets | Yes | "The benchmarks in this section were derived from the MIMIC-IV dataset [Johnson et al., 2020]." The MIMIC-IV dataset... available under the PhysioNet Credentialed Health Data License. |
| Dataset Splits | Yes | "For all models, in both domains, we use a 60-10-30 train-validation-test data partition." |
| Hardware Specification | Yes | "All Heparin and Vancomycin experiments were run on a single Ubuntu machine with eight i9-10900X CPU cores and an Nvidia RTX A5000 GPU." |
| Software Dependencies | No | The paper mentions the use of the "standard Adam optimizer [Kingma and Ba, 2015]" and discusses LSTM models, but it does not provide specific version numbers for any software libraries or dependencies used in the experiments (e.g., PyTorch, TensorFlow, scikit-learn versions). |
| Experiment Setup | Yes | "The latent space dimension n and the model-update frequency t are determined as hyperparameters." The standard Adam optimizer [Kingma and Ba, 2015] is then used to optimize the parameters. "The LSTM itself has an input of 19 dimensions... It has a hidden size of 64 and two recurrent layers, with dropout of 0.2." |
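The 60-10-30 train-validation-test partition reported above can be illustrated with a minimal sketch. The function name, seed, and shuffling strategy here are assumptions for illustration; the paper does not specify how examples are assigned to splits:

```python
import random

def split_60_10_30(items, seed=0):
    """Shuffle a dataset and partition it into 60% train, 10% validation, 30% test."""
    rng = random.Random(seed)  # fixed seed (hypothetical) for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.6 * n)
    n_val = int(0.1 * n)
    # Remaining ~30% of the examples go to the test set.
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_60_10_30(list(range(100)))  # 60 / 10 / 30 examples
```

For clinical time-series such as MIMIC-IV, splits are often made per patient rather than per sample to avoid leakage; the sketch above does not encode that choice.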
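The LSTM baseline described in the experiment-setup row (19-dimensional input, hidden size 64, two recurrent layers, dropout 0.2, trained with Adam) can be sketched in PyTorch. The class name, output head, and default learning rate are assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class LSTMBaseline(nn.Module):
    """Sketch of the reported LSTM architecture: 19-dim input, hidden size 64,
    two recurrent layers, dropout 0.2 between layers."""
    def __init__(self, input_dim=19, hidden_size=64, num_layers=2,
                 dropout=0.2, output_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_size, num_layers=num_layers,
                            dropout=dropout, batch_first=True)
        # Linear readout head (assumed; the paper does not describe the output layer).
        self.head = nn.Linear(hidden_size, output_dim)

    def forward(self, x):
        # x: (batch, time, 19) -> predictions: (batch, time, output_dim)
        out, _ = self.lstm(x)
        return self.head(out)

model = LSTMBaseline()
optimizer = torch.optim.Adam(model.parameters())  # "standard Adam optimizer"
pred = model(torch.randn(8, 50, 19))  # batch of 8 sequences, 50 time steps each
```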