Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Bayesian Calibration of Imperfect Computer Models using Physics-Informed Priors
Authors: Michail Spitieris, Ingelin Steinsland
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our approach is demonstrated in simulation and real data case studies where the physics are described by time-dependent ODEs (cardiovascular models) and space-time dependent PDEs (heat equation). In the studies, it is shown that our modelling framework can recover the true parameters of the physical models in cases where 1) the reality is more complex than our modelling choice and 2) the data acquisition process is biased while also producing accurate predictions. Furthermore, it is demonstrated that our approach is computationally faster than traditional Bayesian calibration methods. |
| Researcher Affiliation | Academia | Michail Spitieris (EMAIL), Ingelin Steinsland (EMAIL), Department of Mathematical Sciences, NTNU (Norwegian University of Science and Technology), 7491 Trondheim, Norway |
| Pseudocode | No | The paper provides mathematical derivations and descriptions of the methods but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code to replicate all the results in the paper is available at https://github.com/MiSpitieris/BC-with-PI-priors. |
| Open Datasets | Yes | This case study is based on observations of blood flow and blood pressure from one individual that took part in a randomized controlled trial described in Øyen (2020). |
| Dataset Splits | No | The paper describes generating synthetic data and observing real data (e.g., 'we simulate 35 data points for u(t, x) and 20 data points for f(t, x)', 'We use three cycles for both pressure and flow'). However, it does not specify traditional dataset splits for training, validation, or testing in a machine learning context. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU models, CPU types, or cloud instances) used for running the experiments or simulations. |
| Software Dependencies | No | The paper mentions: "In this paper, we use Hamiltonian Monte Carlo (HMC) sampling and, more specifically, the No U-Turn Sampler (NUTS) (Hoffman et al., 2014) variation implemented in the probabilistic programming language STAN (Carpenter et al., 2017)." However, it does not specify version numbers for STAN or any other software components. |
| Experiment Setup | Yes | We assign priors to the physical model parameters φ that reflect underlying scientific knowledge, and also assign priors to the mean, kernel, and noise parameters. For convenience, we denote all the parameters collectively ξ = (φ, β, θ, σu, σf). Furthermore, uniform priors are assigned to the physical parameters of interest on a range of reasonable values, R, C ~ U(0.5, 3), and weakly informative priors to the other model hyperparameters (see Appendix B.1, WK2 model). Eight inducing points for the blood pressure, m_P = 8, and ten inducing points for the blood inflow, m_Q = 10, are used for both the FITC and VFE approximations. |
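To make the calibration setup in the last row concrete: the paper samples physical parameters such as R and C under uniform priors U(0.5, 3) using NUTS in Stan. The sketch below is purely illustrative and not the authors' code: it uses a hypothetical two-parameter toy model (not the WK2 cardiovascular model) and a plain random-walk Metropolis sampler instead of NUTS, to show the general shape of Bayesian calibration with bounded uniform priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "physical model" with two parameters standing in for R, C.
# (The paper's actual models are ODE/PDE-based; this is only an illustration.)
def model(t, R, C):
    return R * np.exp(-t / C)

# Synthetic observations with Gaussian noise
t = np.linspace(0.0, 2.0, 20)
R_true, C_true, sigma = 1.2, 1.8, 0.02
y = model(t, R_true, C_true) + rng.normal(0.0, sigma, t.size)

def log_post(theta):
    R, C = theta
    # Uniform priors R, C ~ U(0.5, 3), as in the paper's setup
    if not (0.5 < R < 3.0 and 0.5 < C < 3.0):
        return -np.inf
    resid = y - model(t, R, C)
    # Gaussian likelihood with known noise level sigma
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis (the paper uses NUTS in Stan instead)
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(8000):
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[2000:])  # discard burn-in

print("posterior means:", samples.mean(axis=0))
```

The posterior means should land near the true (R, C) used to generate the synthetic data, mirroring the paper's demonstration that the framework recovers true physical parameters from noisy observations.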