Hybrid$^2$ Neural ODE Causal Modeling and an Application to Glycemic Response
Authors: Bob Junyi Zou, Matthew E. Levine, Dessi P. Zaharieva, Ramesh Johari, Emily Fox
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our ability to achieve a win-win: state-of-the-art predictive performance and causal validity in the challenging task of modeling glucose dynamics post-exercise in individuals with type 1 diabetes. Our experiments illustrate this win-win: across a wide range of settings of α, state-of-the-art predictive performance does not drop while causal validity dramatically improves. |
| Researcher Affiliation | Collaboration | (1) Institute for Computational and Mathematical Engineering, Stanford University; (2) Broad Institute of MIT and Harvard; (3) Department of Pediatrics, Stanford University; (4) Department of Management Science and Engineering, Stanford University; (5) Department of Statistics and Department of Computer Science, Stanford University; (6) Chan Zuckerberg Biohub San Francisco. |
| Pseudocode | Yes | Algorithm 1: UVA Padova Model... Algorithm 10: Repeated Nested Cross Validation |
| Open Source Code | Yes | Our implementation of H2NCM is available at https://github.com/bobjz/H2NCM. |
| Open Datasets | Yes | Our data come from the Type 1 Diabetes Exercise Initiative (T1DEXI) (Riddell et al., 2023), which can be requested via https://doi.org/10.25934/PR00008428. |
| Dataset Splits | Yes | Due to the small dataset size, we tune our models and compute test error using repeated nested cross-validation (CV) with 3 repeats, 6 outer folds, and 4 inner folds. The inner folds are used to tune hyperparameters and the outer folds to estimate generalization error; results are averaged over three runs in which the sequences are randomly shuffled. (This protocol is sketched in code below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software such as the "Adam optimizer (Kingma & Ba, 2015)" and the "PyTorch default setting" but does not specify version numbers for these dependencies, which are needed for reproducibility. (What those defaults mean, and how a run could log its versions, is sketched below the table.) |
| Experiment Setup | Yes | Set-up: For each model mentioned in the experiment section, we offer a detailed description of the corresponding computational method and the hyperparameters used: Learning Rate, Initialization and Optimizer... Training Epochs... Dropout... Hyperparameter Search... (An illustrative configuration grid is sketched below.) |
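
The Dataset Splits row describes repeated nested cross-validation (3 repeats, 6 outer folds, 4 inner folds, with sequences reshuffled each repeat). A minimal sketch of that protocol, assuming scikit-learn is available; `train_model` and `score_model` are hypothetical stand-ins for the paper's training loop and evaluation metric, not its actual code.

```python
import numpy as np
from sklearn.model_selection import KFold


def train_model(train_seqs, params):
    """Hypothetical stand-in for the paper's training loop."""
    return params


def score_model(model, seqs):
    """Hypothetical stand-in for the evaluation metric (e.g., forecast RMSE)."""
    return 0.0


def repeated_nested_cv(sequences, param_grid, n_repeats=3, n_outer=6,
                       n_inner=4, seed=0):
    rng = np.random.default_rng(seed)
    outer_scores = []
    for _ in range(n_repeats):
        # Randomly shuffle the sequences before each repeat.
        data = [sequences[i] for i in rng.permutation(len(sequences))]
        for train_idx, test_idx in KFold(n_splits=n_outer).split(data):
            train = [data[i] for i in train_idx]
            test = [data[i] for i in test_idx]
            # Inner loop: choose hyperparameters by inner-fold validation error.
            best_params, best_err = None, float("inf")
            for params in param_grid:
                errs = [
                    score_model(
                        train_model([train[i] for i in tr], params),
                        [train[i] for i in va],
                    )
                    for tr, va in KFold(n_splits=n_inner).split(train)
                ]
                if np.mean(errs) < best_err:
                    best_params, best_err = params, float(np.mean(errs))
            # Outer loop: refit on the full outer-train split, score on test.
            outer_scores.append(score_model(train_model(train, best_params), test))
    # Average over repeats and outer folds to estimate generalization error.
    return float(np.mean(outer_scores))
```

The key property of this design is that the outer test folds never influence hyperparameter selection, which happens entirely within the inner folds.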
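As context for the missing-versions finding in the Software Dependencies row, the sketch below spells out what the "PyTorch default setting" for Adam means and shows one way a run could log its environment. The `torch.nn.Linear` model is a placeholder, not the paper's architecture.

```python
import torch

model = torch.nn.Linear(4, 1)  # placeholder module, not the paper's model
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,             # PyTorch default learning rate
    betas=(0.9, 0.999),  # PyTorch default momentum coefficients
    eps=1e-8,            # PyTorch default numerical-stability constant
    weight_decay=0.0,    # PyTorch default (no L2 regularization)
)
# Logging the version pins down the "default setting" a run actually used.
print("torch", torch.__version__)
```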
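To make the Experiment Setup items concrete, here is a hypothetical configuration grid of the kind a hyperparameter search would iterate over. The values are illustrative, not the paper's; `param_grid` plugs into the nested-CV sketch above.

```python
from itertools import product

# Illustrative knobs matching the set-up items listed in the table row:
# optimizer, training epochs, learning rate, dropout, and a search grid.
base_config = {"optimizer": "adam", "epochs": 200}
search_space = {
    "lr": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.1, 0.3],
}
# Expand the search space into the param_grid consumed by repeated_nested_cv.
param_grid = [
    {**base_config, **dict(zip(search_space, combo))}
    for combo in product(*search_space.values())
]
```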