Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease Progression

Authors: Zhaozhi Qian, William Zame, Lucas Fleuren, Paul Elbers, Mihaela van der Schaar

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluated LHM on synthetic data as well as real-world intensive care data of COVID-19 patients. LHM consistently outperforms previous works, especially when few training samples are available, such as at the beginning of the pandemic. |
| Researcher Affiliation | Collaboration | Zhaozhi Qian (University of Cambridge, zq224@cam.ac.uk); William R. Zame (UCLA, zame@econ.ucla.edu); Lucas M. Fleuren (Amsterdam UMC, l.fleuren@amsterdamumc.nl); Paul Elbers (Amsterdam UMC, p.elbers@amsterdamumc.nl); Mihaela van der Schaar (University of Cambridge, UCLA, The Alan Turing Institute, mv472@cam.ac.uk) |
| Pseudocode | No | The paper includes 'Figure 2: Illustration of the training and prediction procedure', which shows a diagram of steps, but it does not provide formal pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | The implementation of LHM and the experiment code are available at https://github.com/ZhaozhiQIAN/Hybrid-ODE-NeurIPS-2021 or https://github.com/orgs/vanderschaarlab/repositories |
| Open Datasets | Yes | We used data from the Dutch Data Warehouse (DDW), a multicenter and full-admission anonymized electronic health records database of critically ill COVID-19 patients [27]. |
| Dataset Splits | Yes | We partition each dataset into a training set, a validation set, and a testing set. We consider training sets consisting of N0 = 10, 100, or 500 data points; each validation set has 100 data points and each testing set has 1000 data points. (A minimal sketch of this partition appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or cloud computing specifications. |
| Software Dependencies | No | The paper mentions the use of the Adam optimizer, but it does not specify version numbers for any software dependencies such as programming languages, deep learning frameworks (e.g., PyTorch, TensorFlow), or specific libraries. (A generic Adam training-step sketch appears after the table.) |
| Experiment Setup | No | The paper states, 'The details of the optimization and hyper-parameter settings are reported in Appendix A.4.' However, Appendix A.4 is not included in the provided text of the paper, so specific setup details such as hyperparameter values are not available. |
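
The Dataset Splits row fully specifies the partition sizes, so the split is straightforward to reproduce in outline. Below is a minimal sketch, not the authors' code: the `partition` helper, the total pool of 2000 trajectories, and the fixed seed are illustrative assumptions; only the split sizes (N0 = 10, 100, or 500 training points, 100 validation, 1000 test) come from the paper.

```python
import numpy as np

def partition(n_total, n_train, n_val=100, n_test=1000, seed=0):
    """Return disjoint index arrays for training, validation, and testing."""
    assert n_train + n_val + n_test <= n_total
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_total)  # shuffle once, then slice into subsets
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# The paper considers training sets of N0 = 10, 100, or 500 data points.
for n0 in (10, 100, 500):
    train_idx, val_idx, test_idx = partition(n_total=2000, n_train=n0)
```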
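
The Software Dependencies row notes that only the Adam optimizer is named, with no framework or version information. For concreteness, a generic single-step Adam update in PyTorch is sketched below; the choice of PyTorch, the stand-in linear model, and the learning rate of 1e-3 are assumptions, not details confirmed by the paper.

```python
import torch

model = torch.nn.Linear(8, 1)  # stand-in for the LHM network (assumption)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is illustrative

x, y = torch.randn(32, 8), torch.randn(32, 1)  # dummy batch
loss = torch.nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```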