MetaPhysiCa: Improving OOD Robustness in Physics-informed Machine Learning

Authors: S Chandra Mouli, Muhammad Alam, Bruno Ribeiro

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using three different OOD tasks, we empirically observe that the proposed approach significantly outperforms existing state-of-the-art PIML and deep learning methods (with 2× to 28× lower OOD errors). We report normalized root mean squared errors (i.e., RMSE normalized with ground truth standard deviation) for the in-distribution (ID), OOD w.r.t. X_t0, and OOD w.r.t. X_t0 and W settings in Tables 1, 2 and 3 (latter two in the appendix) for the 3 datasets. (See the NRMSE sketch after the table.)
Researcher Affiliation | Academia | S Chandra Mouli, Department of Computer Science, Purdue University, Indiana, US; Muhammad Ashraful Alam, Department of Electrical and Computer Engineering, Purdue University, Indiana, US; Bruno Ribeiro, Department of Computer Science, Purdue University, Indiana, US
Pseudocode | No | The paper describes the methodology using text and equations but does not include structured pseudocode or algorithm blocks (e.g., a figure or section explicitly labeled 'Algorithm' or 'Pseudocode').
Open Source Code | Yes | Code is available at https://github.com/PurdueMINDS/MetaPhysiCa
Open Datasets | No | For each dynamical system, we simulate the respective ODE as per Definition 1 to generate M = 1000 training tasks, each observed over regularly-spaced discrete time steps t = 0, Δt, ..., T with Δt = 0.1. The paper describes a custom data generation process and does not provide public access information (link, DOI, or a specific citation for dataset download) for the generated datasets. (See the data-generation sketch after the table.)
Dataset Splits | Yes | We choose the hyperparameters that result in the sparsest model (i.e., with the least ||Φ̂||₀) while achieving validation loss within 5% of the best validation loss on held-out in-distribution validation data. For each dynamical system, we simulate the respective ODE as per Definition 1 to generate M = 1000 training tasks... At OOD test, we generate M = 200 test tasks. (See the model-selection sketch after the table.)
Hardware Specification | No | The paper states 'Computing infrastructure was supported in part by CNS-1925001 (Cloud Bank)' but does not provide specific hardware details such as GPU or CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions software components like 'ODE solver (dopri5)', 'GRU', and 'AdaIN' but does not specify version numbers for these or any other software dependencies, making the exact software environment not fully reproducible.
Experiment Setup | Yes | We perform a grid search over the following hyperparameters: regularization strengths λ_Φ ∈ {10⁻⁴, 10⁻³, 5·10⁻³, 10⁻²}, λ_REx ∈ {0, 10⁻², 10⁻¹, 1, 10}, and learning rates η ∈ {10⁻², 10⁻³, 10⁻⁴}. (See the grid-search sketch after the table.)
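
The 'Research Type' row quotes the paper's evaluation metric: RMSE normalized by the ground-truth standard deviation. A minimal sketch of that metric, assuming trajectories are stored as NumPy arrays of matching shape (the function name is ours, not the paper's):

```python
import numpy as np

def normalized_rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """RMSE divided by the standard deviation of the ground-truth trajectory."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.std(y_true)
```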
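The 'Open Datasets' row describes simulating each task's ODE at regularly spaced time steps t = 0, Δt, ..., T with Δt = 0.1 for M = 1000 training tasks. The paper's actual generator is not reproduced here; the following is a sketch under assumed settings (a damped-pendulum example system, T = 10, and uniformly drawn per-task initial conditions), using SciPy's solve_ivp:

```python
import numpy as np
from scipy.integrate import solve_ivp

def damped_pendulum(t, x, omega=1.0, alpha=0.1):
    """Example dynamics; the paper defines its own systems and parameter ranges."""
    theta, dtheta = x
    return [dtheta, -alpha * dtheta - omega**2 * np.sin(theta)]

def simulate_tasks(n_tasks=1000, T=10.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    t_eval = np.arange(0.0, T + dt, dt)        # t = 0, dt, ..., T
    tasks = []
    for _ in range(n_tasks):
        x0 = rng.uniform(-1.0, 1.0, size=2)     # task-specific initial condition X_t0
        sol = solve_ivp(damped_pendulum, (0.0, T), x0, t_eval=t_eval)
        tasks.append(sol.y.T)                    # shape: (len(t_eval), state_dim)
    return np.stack(tasks)
```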
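The 'Dataset Splits' row quotes the model-selection rule: among all hyperparameter configurations, pick the sparsest model (smallest ||Φ̂||₀) whose validation loss is within 5% of the best validation loss. A minimal sketch of that rule, assuming each run is summarized by a non-negative validation loss and the number of non-zero entries in Φ̂ (key names are ours):

```python
def select_model(runs, tolerance=0.05):
    """runs: list of dicts with keys 'val_loss' and 'phi_l0'.
    Returns the sparsest run whose validation loss is within `tolerance`
    (relative) of the best validation loss; assumes non-negative losses."""
    best_loss = min(r["val_loss"] for r in runs)
    eligible = [r for r in runs if r["val_loss"] <= best_loss * (1 + tolerance)]
    return min(eligible, key=lambda r: r["phi_l0"])
```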
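The 'Experiment Setup' row lists the hyperparameter grid. A minimal sketch of enumerating that grid (the variable names are ours; the values are the ones quoted above):

```python
from itertools import product

LAMBDA_PHI = [1e-4, 1e-3, 5e-3, 1e-2]   # regularization strength lambda_Phi
LAMBDA_REX = [0, 1e-2, 1e-1, 1, 10]      # REx penalty strength lambda_REx
LEARNING_RATES = [1e-2, 1e-3, 1e-4]      # learning rate eta

configs = [
    {"lambda_phi": lp, "lambda_rex": lrex, "lr": lr}
    for lp, lrex, lr in product(LAMBDA_PHI, LAMBDA_REX, LEARNING_RATES)
]
print(len(configs))  # 4 * 5 * 3 = 60 configurations to train and validate
```

Each configuration would be trained and scored on the held-out in-distribution validation data, after which a rule like select_model above picks the final model.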