Automatically Learning Hybrid Digital Twins of Dynamical Systems

Authors: Samuel Holt, Tennison Liu, Mihaela van der Schaar

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical results reveal that HDTwinGen produces generalizable, sample-efficient, and evolvable models, significantly advancing DTs' efficacy in real-world applications.
Researcher Affiliation | Academia | Samuel Holt, Tennison Liu & Mihaela van der Schaar, DAMTP, University of Cambridge, Cambridge, UK ({sih31, tl522, mv472}@cam.ac.uk)
Pseudocode | Yes | An overview of our method is presented in Figure 1, with pseudocode in Appendix E.1. (An illustrative sketch of this loop is given after the table.)
Open Source Code | Yes | Code is available at https://github.com/samholt/HDTwinGen.
Open Datasets | Yes | We evaluate against six real-world complex system datasets, where each dataset is either a real-world dataset or has been sampled from an accurate simulator designed by human experts. Three are derived from a state-of-the-art biomedical Pharmacokinetic-Pharmacodynamic (PKPD) model of lung cancer tumor growth, used to simulate the combined effects of chemotherapy and radiotherapy in lung cancer [61] (Equation (2))... We also compare against an accurate and complex COVID-19 epidemic agent-based simulator (COVID-19) [65]... Furthermore, we compare against an ecological model of a microcosm of algae, flagellate, and rotifer populations (Plankton Microcosm), replicating an experimental three-species prey-predator system [66]. Finally, we compare against a real-world dataset of hare and lynx populations (Hare-Lynx), replicating predator-prey dynamics [67]. (A hedged sketch of the PKPD model appears after the table.)
Dataset Splits | Yes | Here, the outer objective measures the generalization performance, empirically measured on the validation set (L_val), while the inner objective measures the fitting error, as evaluated on the training set (L_train). (This bilevel objective is written out after the table.)
Hardware Specification | Yes | We perform all experiments and training using a single Intel Core i9-12900K CPU @ 3.20GHz, 64GB RAM, with an Nvidia RTX 3090 GPU (24GB).
Software Dependencies | Yes | Specifically, we find a top-K, where K = 16, is sufficient. Additionally, we use the LLM GPT-4-1106-Preview with a temperature of 0.7. (A matching API-call sketch appears after the table.)
Experiment Setup | Yes | Specifically, we train the model on the training dataset using the standard MSE loss (Equation (5)), optimizing with the Adam optimizer [32]. We use the same optimizer hyperparameters as the black-box neural network method: a learning rate of 0.01, a batch size of 1,000, and early stopping with a patience of 20, training for 2,000 epochs to ensure convergence and a fair comparison. (A minimal training-loop sketch appears after the table.)
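
For readers without access to Appendix E.1, the following is a minimal, illustrative sketch of the evolutionary loop the paper describes: an LLM proposes hybrid (mechanistic + neural) model specifications, each candidate's free parameters are fit on the training split, and a top-K archive ranked by validation loss seeds the next round. This is a sketch, not the authors' implementation; `propose`, `fit`, and `score` are hypothetical callables standing in for the LLM call, the inner optimization, and the validation evaluation.

```python
import heapq

def hdtwingen_sketch(propose, fit, score, train_data, val_data,
                     k=16, generations=20):
    """Illustrative evolutionary loop (a sketch, not the authors' code).

    propose(archive)            -> model spec  (e.g. an LLM call conditioned
                                                on top-K candidates and scores)
    fit(spec, data)             -> parameters  (inner loop: fit on train split)
    score(spec, params, data)   -> float       (outer loop: validation loss)
    """
    archive = []  # (val_loss, spec, params) tuples, kept to the K best
    for _ in range(generations):
        spec = propose(archive)               # LLM proposes a hybrid model
        params = fit(spec, train_data)        # optimize its free parameters
        loss = score(spec, params, val_data)  # measure generalization
        archive = heapq.nsmallest(k, archive + [(loss, spec, params)],
                                  key=lambda c: c[0])
    return min(archive, key=lambda c: c[0])   # best candidate found
```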
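As a concrete example of the mechanistic side of these benchmarks, the PKPD tumor-growth model cited as [61] is, in the form published by Geng et al. and widely used in the treatment-effects literature, dV/dt = (rho * log(K/V) - beta_c * C(t) - (alpha_r * d(t) + beta_r * d(t)^2)) * V, with V the tumor volume, C the chemotherapy concentration, and d the radiotherapy dose. This is a reconstruction from the cited literature, not a verbatim copy of the paper's Equation (2), and all parameter values and treatment schedules below are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (placeholders, not values from the paper).
rho, K = 7.0e-2, 30.0            # growth rate, carrying capacity
beta_c = 0.03                    # chemotherapy cell-kill coefficient
alpha_r, beta_r = 0.05, 0.005    # linear-quadratic radiotherapy coefficients

def chemo(t):   # chemotherapy concentration over time (toy schedule)
    return 1.0 if 10 <= t <= 15 else 0.0

def radio(t):   # radiotherapy dose over time (toy schedule)
    return 2.0 if 20 <= t <= 21 else 0.0

def tumor_ode(t, y):
    v = y[0]
    dv = (rho * np.log(K / v)
          - beta_c * chemo(t)
          - (alpha_r * radio(t) + beta_r * radio(t) ** 2)) * v
    return [dv]

sol = solve_ivp(tumor_ode, t_span=(0, 60), y0=[1.0], max_step=0.1)
print(sol.y[0, -1])  # tumor volume at day 60
```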
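Written out, the train/validation usage quoted in the Dataset Splits row corresponds to a standard bilevel program. The notation below (f a candidate model specification, theta its parameters) is assumed from context rather than copied from the paper:

```latex
\min_{f \in \mathcal{F}} \ \mathcal{L}_{\mathrm{val}}\!\left(f, \theta^{*}(f)\right)
\quad \text{subject to} \quad
\theta^{*}(f) \in \operatorname*{arg\,min}_{\theta} \ \mathcal{L}_{\mathrm{train}}(f, \theta)
```

The outer minimization over model specifications measures generalization on the validation set, while the inner minimization fits each candidate's parameters on the training set.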
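The quoted LLM settings map directly onto an OpenAI chat-completion call. A minimal sketch, assuming the openai Python SDK (v1+); the prompt contents are hypothetical, not the paper's prompts:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the GPT-4-1106-Preview model quoted above
    temperature=0.7,             # the sampling temperature quoted above
    messages=[
        {"role": "system", "content": "You design hybrid ODE models."},   # hypothetical
        {"role": "user", "content": "Propose a model for the data below ..."},  # hypothetical
    ],
)
print(response.choices[0].message.content)
```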
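The quoted optimization setup is straightforward to reproduce. Below is a minimal PyTorch sketch under the stated hyperparameters (Adam, learning rate 0.01, batch size 1,000, early stopping with patience 20, up to 2,000 epochs); the model and data tensors are placeholders for a candidate hybrid model and its dataset:

```python
import copy
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and model; substitute a candidate hybrid model here.
X, y = torch.randn(5000, 4), torch.randn(5000, 1)
X_val, y_val = torch.randn(1000, 4), torch.randn(1000, 1)
model = torch.nn.Linear(4, 1)

loader = DataLoader(TensorDataset(X, y), batch_size=1000, shuffle=True)
opt = torch.optim.Adam(model.parameters(), lr=0.01)  # settings quoted above
loss_fn = torch.nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 20, 0
best_state = copy.deepcopy(model.state_dict())
for epoch in range(2000):
    for xb, yb in loader:            # standard MSE training step
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    with torch.no_grad():            # validation loss for early stopping
        val = loss_fn(model(X_val), y_val).item()
    if val < best_val:
        best_val, bad_epochs = val, 0
        best_state = copy.deepcopy(model.state_dict())
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # stop after 20 epochs without improvement
            break
model.load_state_dict(best_state)    # restore the best validation checkpoint
```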