Learning Physical Models that Can Respect Conservation Laws

Authors: Derek Hansen, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, Michael W. Mahoney

ICML 2023

Each reproducibility variable below is listed with its result, followed by the supporting LLM response.

Research Type: Experimental
In this section, we provide an empirical evaluation to illustrate the main aspects of our proposed framework PROBCONSERV. Unless otherwise stated, we use the limiting solution described in Equation 8, with σ_G = 0, so that conservation is enforced exactly through the integral form of the PDE. We organize our empirical results around the following questions:
1. Integral vs. differential form?
2. Strong control on the enforcement of the conservation constraint?
3. Easy-to-hard PDEs?
4. Uncertainty quantification (UQ) for downstream tasks?
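The σ_G = 0 limit referenced in this response amounts to noise-free Gaussian conditioning on the linear constraint G u = b derived from the integral form of the PDE. Below is a minimal NumPy sketch of that limiting update, assuming a discretized constraint operator G and target values b; the function name and array shapes are our illustration, not code from the paper.

```python
import numpy as np

def constrain_exact(mu, Sigma, G, b):
    """Condition a Gaussian belief N(mu, Sigma) on the linear conservation
    constraint G @ u = b (the sigma_G -> 0 limit described in the paper).

    mu:    (n,)   unconstrained posterior mean over solution values
    Sigma: (n, n) unconstrained posterior covariance
    G:     (m, n) discretized integral (conservation) operator
    b:     (m,)   conserved quantities, e.g. total mass at each time
    """
    S = G @ Sigma @ G.T                     # covariance in constraint space
    K = Sigma @ G.T @ np.linalg.pinv(S)     # gain; pinv guards against rank deficiency
    mu_tilde = mu + K @ (b - G @ mu)        # updated mean satisfies G @ mu_tilde = b
    Sigma_tilde = Sigma - K @ G @ Sigma     # uncertainty collapses along the constraint
    return mu_tilde, Sigma_tilde
```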
Researcher Affiliation: Collaboration
(1) Dept. of Statistics, University of Michigan, Ann Arbor, MI, USA (work done during an internship at AWS AI Labs); (2) AWS AI Labs, Santa Clara, CA, USA; (3) Amazon Supply Chain Optimization Technologies, New York, NY, USA.
Pseudocode: Yes
Algorithm 1: PROBCONSERV
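Reading Algorithm 1 as the two-step pipeline the paper describes (Step 1: fit an unconstrained probabilistic model; Step 2: project its Gaussian output onto the conservation constraint), a hedged sketch follows. It reuses constrain_exact from the sketch above, and predict is a stand-in for any trained mean/covariance model, e.g. the ANP; the names and signatures are ours, not the paper's.

```python
def probconserv_predict(predict, G, b, x_query):
    """Our paraphrase of the two-step PROBCONSERV pipeline; names are illustrative.

    predict: trained data-driven model mapping query points to an
             unconstrained Gaussian belief (mu, Sigma)
    G, b:    discretization of the integral-form conservation constraint
    """
    mu, Sigma = predict(x_query)             # Step 1: unconstrained prediction
    return constrain_exact(mu, Sigma, G, b)  # Step 2: enforce conservation exactly
```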
Open Source Code: Yes
The code is available at https://github.com/amazon-science/probconserv.
Open Datasets: No
For each PDE instance, we first generate training data for the data-driven model in Step 1. We generate these samples, indexed by i, by randomly sampling n_train values of the PDE parameters α_i from an interval A. To create the input data D_i, the solution profile corresponding to α_i is evaluated on a set of N_D points uniformly sampled from the spatiotemporal domain [0, T] × Ω. Then, the reference solution for u with parameter α_i, denoted u_i, is evaluated over another set of N_train uniformly-sampled points.
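The sampling procedure quoted above maps directly onto a short data-generation loop. A sketch under our own naming follows; solution is a hypothetical stand-in for the reference solver u(x, t; α), which the paper evaluates per PDE family, and the domain [0, T] × Ω is taken as a 1D interval for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_data(solution, alpha_range, n_train, n_points, t_max, x_range):
    """Illustrative version of the described sampling; `solution` is a
    hypothetical reference solver u(x, t; alpha), not an API from the paper."""
    data = []
    for i in range(n_train):
        alpha = rng.uniform(*alpha_range)           # sample PDE parameter alpha_i from A
        t = rng.uniform(0.0, t_max, size=n_points)  # uniform samples over [0, T]
        x = rng.uniform(*x_range, size=n_points)    # uniform samples over Omega (1D here)
        u = solution(x, t, alpha)                   # evaluate reference solution u_i
        data.append((np.stack([x, t], axis=-1), u, alpha))
    return data
```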
Dataset Splits: No
The paper describes training- and test-data generation and settings (Tables 6 and 7) but does not explicitly mention a validation split.
Hardware Specification: No
The paper does not contain any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies: No
The paper mentions software such as the Adam optimizer and the ANP model, but it does not specify version numbers for these or other software dependencies.
Experiment Setup: Yes
Specifically, we use the Adam optimizer with a learning rate of 1 × 10^-4 and a batch size of 250.
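Those reported settings drop straight into a standard training configuration. A minimal PyTorch sketch follows, in which the model and data are placeholders; only the optimizer choice (Adam), the learning rate (1e-4), and the batch size (250) come from the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(2, 1)        # placeholder; the paper's data-driven model is an ANP
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported learning rate

inputs = torch.randn(1000, 2)        # stand-in (x, t) query points
targets = torch.randn(1000, 1)       # stand-in solution values
loader = DataLoader(TensorDataset(inputs, targets), batch_size=250, shuffle=True)

for xb, yb in loader:                # one epoch under the reported setup
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(xb), yb)
    loss.backward()
    optimizer.step()
```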