PICProp: Physics-Informed Confidence Propagation for Uncertainty Quantification

Authors: Qianli Shen, Wai Hoh Tang, Zhun Deng, Apostolos Psaros, Kenji Kawaguchi

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We provide a theorem regarding the validity of our method, and computational experiments, where the focus is on physics-informed learning."
Researcher Affiliation | Academia | Qianli Shen (National University of Singapore, Singapore; shenqianli@u.nus.edu); Wai Hoh Tang (National University of Singapore, Singapore; waihoh.tang@nus.edu.sg); Zhun Deng (Columbia University, USA; zhun.d@columbia.edu); Apostolos Psaros (Brown University, USA; a.psaros@brown.edu); Kenji Kawaguchi (National University of Singapore, Singapore; kenji@nus.edu.sg)
Pseudocode | Yes | "The developed methods are summarized in Algorithms 1 and 2." (A generic bi-level skeleton illustrating the inner/meta loop terminology follows the table.)
Open Source Code | Yes | Code is available at https://github.com/ShenQianli/PICProp.
Open Datasets | No | The paper generates synthetic data with specified noise distributions (e.g., Gaussian, uniform) for its experiments, rather than using a publicly available dataset with concrete access information. (A sketch of such noise generation follows the table.)
Dataset Splits | Yes | "In the rest of the examples, we split 10% of the training data as the validation set to select the best λ from {0.0, 0.25, 0.5, 0.75, 1.0}." (A split-and-select sketch follows the table.)
Hardware Specification | No | The paper acknowledges support from the "Google Cloud Research Credit program" and the "National Supercomputing Centre, Singapore", and refers to a "GPU" in Table J.2, but does not specify exact GPU models, CPU models, or other detailed hardware specifications.
Software Dependencies | No | The paper mentions some software implicitly (e.g., PINN, NeuralUQ) but does not provide specific version numbers for any libraries, frameworks, or solvers that would be necessary for reproduction.
Experiment Setup | Yes | Table J.1: "Summary of implementation details, and utilized hyperparameters and architectures in the experiments," covering MLP architecture, inner/meta optimizer, inner/meta learning rate, hypergradient method, warmup/inner/meta steps, and λ. (An illustrative configuration schema follows the table.)
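For orientation, the warmup/inner/meta step terminology that recurs in the table maps onto a standard bi-level training loop. The sketch below is a generic skeleton under that interpretation, with placeholder callables `inner_step` and `meta_step`; it is not a transcription of the paper's Algorithms 1 and 2.

```python
def bilevel_optimize(init_theta, init_phi, inner_step, meta_step,
                     warmup_steps=100, inner_steps=10, meta_steps=50):
    """Generic warmup/inner/meta loop matching the step terminology in the
    table above; NOT a transcription of the paper's Algorithms 1 and 2.

    `inner_step(theta, phi)` updates model parameters theta for fixed meta
    variables phi; `meta_step(theta, phi)` updates phi (e.g., via a
    hypergradient). Both are hypothetical placeholders.
    """
    theta, phi = init_theta, init_phi
    for _ in range(warmup_steps):        # warm up the inner model first
        theta = inner_step(theta, phi)
    for _ in range(meta_steps):          # outer (meta) loop
        for _ in range(inner_steps):     # inner loop per meta update
            theta = inner_step(theta, phi)
        phi = meta_step(theta, phi)      # hypergradient-style meta update
    return theta, phi
```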
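Since the experiments use synthetic data rather than a public dataset, a minimal sketch of how noisy observations of a known solution might be generated is shown below. The function name, noise scale, and the toy `np.sin` ground truth are illustrative assumptions, not the paper's actual data generator.

```python
import numpy as np

def make_noisy_observations(u_exact, x_obs, noise="gaussian", scale=0.05, seed=0):
    """Generate synthetic observations u(x) + eps with Gaussian or uniform noise,
    in the spirit of the Open Datasets row above. All settings are placeholders.
    """
    rng = np.random.default_rng(seed)
    u = u_exact(x_obs)
    if noise == "gaussian":
        eps = rng.normal(0.0, scale, size=u.shape)
    elif noise == "uniform":
        eps = rng.uniform(-scale, scale, size=u.shape)
    else:
        raise ValueError(f"unknown noise type: {noise}")
    return u + eps

# Example: noisy boundary observations for a toy 1D problem.
x_obs = np.array([0.0, 1.0])
u_noisy = make_noisy_observations(np.sin, x_obs, noise="gaussian")
```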
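The quoted model-selection protocol (hold out 10% of the training data as validation, pick the best λ from a small grid) can be sketched as follows; `train_and_score` is a hypothetical stand-in for whatever training routine returns a validation loss, not a function from the paper or its repository.

```python
import numpy as np

def select_lambda(x, y, train_and_score,
                  lambdas=(0.0, 0.25, 0.5, 0.75, 1.0), seed=0):
    """Split off 10% of (x, y) as a validation set and return the lambda
    with the lowest validation score, per the Dataset Splits row above.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = max(1, int(0.1 * len(x)))       # 10% held out for validation
    val_idx, tr_idx = idx[:n_val], idx[n_val:]
    scores = {lam: train_and_score(x[tr_idx], y[tr_idx],
                                   x[val_idx], y[val_idx], lam)
              for lam in lambdas}
    return min(scores, key=scores.get)       # lambda with best validation loss
```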
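Finally, the fields listed for Table J.1 suggest a configuration schema along the following lines. Every value below is a placeholder, not the paper's setting; the actual numbers are in the paper's appendix and the released code.

```python
# Illustrative configuration schema mirroring the fields named in Table J.1.
# All values are placeholders; consult the paper and repository for the real ones.
config = {
    "mlp_arch": [2, 50, 50, 50, 1],   # placeholder layer widths
    "inner_optimizer": "adam",        # placeholder
    "meta_optimizer": "adam",         # placeholder
    "inner_lr": 1e-3,                 # placeholder
    "meta_lr": 1e-2,                  # placeholder
    "hypergrad_method": "reverse",    # placeholder
    "warmup_steps": 1000,             # placeholder
    "inner_steps": 100,               # placeholder
    "meta_steps": 500,                # placeholder
    "lambda": 0.5,                    # selected from {0.0, 0.25, 0.5, 0.75, 1.0}
}
```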