Guiding continuous operator learning through Physics-based boundary constraints

Authors: Nadim Saad, Gaurav Gupta, Shima Alizadeh, Danielle C. Maddix

ICLR 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on multiple PDEs with a wide variety of applications indicate that the proposed approach ensures satisfaction of the BCs and leads to more accurate solutions over the entire domain. The proposed correction method exhibits a 2x-20x improvement over a given operator model in relative L2 error (0.000084 relative L2 error for the Burgers equation). Code available at: https://github.com/amazon-science/boon |
| Researcher Affiliation | Collaboration | Nadim Saad¹, Gaurav Gupta², Shima Alizadeh², Danielle C. Maddix². ¹ Stanford University (450 Serra Mall, Stanford, CA 94305); ² AWS AI Labs (2795 Augustine Dr, Santa Clara, CA 95054) |
| Pseudocode | Yes | Algorithm 1 (Dirichlet): Input K, u_0, α_D(x_0, t); Output: corrected u(t) ... Algorithm 2 (Neumann): Input K, u_0, α_N(x_0, t), c ∈ ℝ^N with c_0 = 0; Output: corrected u(t) ... Algorithm 3 (Periodic): Input K, u_0, α, β ∈ ℝ₊ with α + β = 1; Output: corrected u(t) |
| Open Source Code | Yes | Code available at: https://github.com/amazon-science/boon |
| Open Datasets | No | The paper describes its own data-generation process based on PDE solutions but does not provide concrete access information (link, DOI, or specific citation) for a publicly available dataset. |
| Dataset Splits | No | The paper defines D_train and D_test with specific sizes (n_train and n_test in Table 7) but does not mention a separate validation split. |
| Hardware Specification | Yes | "We use a p3.8xlarge Amazon SageMaker instance (Liberty et al., 2020) in the experiments." |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer and various neural operator models (FNO, PINO, MGNO, DeepONet) but does not provide version numbers for any software dependencies. |
| Experiment Setup | Yes | "The models are trained for a total of 500 epochs using the Adam optimizer with an initial learning rate of 0.001. The learning rate decays every 50/100 epochs (1D/rest) by a factor of 0.5. We use the relative L2 error as the loss function (Li et al., 2020a). We compute our BOON model by applying kernel corrections (see Section 3) on four stacked Fourier integral operator layers with GeLU activation (Li et al., 2020a) (see Table 8)." |
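To make the Dirichlet case of the pseudocode concrete, the following is a minimal illustrative sketch. It is not the paper's BOON method, which applies the correction at the kernel level inside the operator; this `dirichlet_correct` helper (a hypothetical name) only demonstrates the weaker idea of post-hoc shifting a predicted solution so that the boundary condition u(x_0, t) = α_D(x_0, t) is satisfied exactly:

```python
import numpy as np

def dirichlet_correct(u_pred, alpha_d):
    """Additive correction enforcing a Dirichlet BC at the left boundary.

    Illustrative sketch only: BOON modifies the operator's kernel so the
    constraint holds by construction, whereas here we shift the predicted
    solution by a linearly decaying profile so that u(x_0, t) = alpha_d(t).

    u_pred : array of shape (n_x, n_t), predicted solution on a uniform grid
    alpha_d: array of shape (n_t,), prescribed boundary values at x_0
    """
    n_x, _ = u_pred.shape
    residual = alpha_d - u_pred[0, :]            # BC violation at x_0
    decay = np.linspace(1.0, 0.0, n_x)[:, None]  # 1 at x_0, 0 at the far end
    return u_pred + decay * residual[None, :]
```

After the correction, the first row of the output matches the prescribed boundary values exactly, while the opposite boundary is left untouched.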
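The experiment-setup row can be summarized in code. This is a hedged sketch of the reported configuration, not the authors' implementation: `relative_l2` implements the standard relative L2 metric (||pred − target||₂ / ||target||₂, averaged over a batch) named as the loss, and `step_lr` (an illustrative helper) reproduces the stated step-decay schedule:

```python
import numpy as np

def relative_l2(pred, target):
    """Relative L2 error, averaged over the batch (first) dimension."""
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    num = np.linalg.norm(pred - target, axis=1)
    den = np.linalg.norm(target, axis=1)
    return float(np.mean(num / den))

def step_lr(epoch, base_lr=1e-3, step=50, gamma=0.5):
    """Step-decay schedule matching the reported setup for 1D problems:
    initial rate 0.001, halved every 50 epochs (every 100 epochs for
    higher-dimensional problems), over 500 epochs total."""
    return base_lr * gamma ** (epoch // step)
```

For example, `relative_l2` returns 0 for a perfect prediction and 1 when the prediction is exactly twice the target, and `step_lr(50)` gives 0.0005 after the first decay.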