Coupled Multiwavelet Operator Learning for Coupled Differential Equations

Authors: Xiongye Xiao, Defu Cao, Ruochen Yang, Gaurav Gupta, Gengshuo Liu, Chenzhong Yin, Radu Balan, Paul Bogdan

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "According to our experimental results, the proposed model exhibits a 2×–4× improvement in relative L2 error compared to the best results from the state-of-the-art models." and "In this section, we empirically evaluate the proposed model on famous coupled PDEs such as the Gray-Scott (GS) equations and the non-local mean field game (MFG) problem characterized by coupled PDEs."
Researcher Affiliation | Academia | "1 University of Southern California, Los Angeles, CA 90089, USA; 2 University of Maryland, College Park, MD 20742, USA"
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code to run the experiments can be found at https://github.com/joshuaxiao98/CMWNO/."
Open Datasets | Yes | "The initial conditions are generated in Gaussian random fields (GRF) according to u0(x), v0(x) ~ N(0, 7^4(−Δ + 7^2 I)^{−2.5}) with periodic boundary conditions. We also use a different scheme to generate u0(x) by using the smooth random functions (Rand) in the chebfun package (Driscoll et al., 2014), which returns a band-limited function defined by a Fourier series with independent random coefficients." and "To obtain the datasets, we generate ρ(x, 0) and ρ(x, t=1) by using the random functions in the chebfun package with the wavelength parameter γ = 0.3 and 0.1, respectively."
Dataset Splits | No | "Unless stated otherwise, we train on 1000 samples and test on 200 samples." (No explicit mention of a validation split.)
Hardware Specification | Yes | "All experiments are done on an Nvidia A100 40GB GPU."
Software Dependencies | No | The paper mentions using the chebfun package (Driscoll et al., 2014) for data generation and the ReLU activation, but does not provide version numbers for these or for other software libraries such as PyTorch or TensorFlow.
Experiment Setup | Yes | "The neural operators are trained using the Adam optimizer with a learning rate of 0.001 and a decay of 0.95 after every 100 steps. The models are trained for a total of 500 epochs, which is the same as the training of CMWNO for a fair comparison."
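The GRF initial conditions quoted under Open Datasets, u0(x) ~ N(0, 7^4(−Δ + 7^2 I)^{−2.5}) with periodic boundary conditions, can be sampled in the Fourier eigenbasis of the periodic Laplacian: mode k of −Δ on [0, 1] has eigenvalue (2πk)^2, so each Fourier coefficient gets standard deviation 7^2((2πk)^2 + 7^2)^{−1.25}. A minimal sketch, not the authors' code; the grid size, the FFT normalization convention, and zeroing the mean mode are assumptions:

```python
import numpy as np

def sample_grf(n=1024, tau=7.0, alpha=2.5, seed=0):
    """Sample a periodic GRF on [0, 1] with covariance
    tau^4 * (-Laplacian + tau^2 I)^(-alpha).
    Mode k of the Laplacian has eigenvalue (2*pi*k)^2, so the
    per-mode standard deviation is tau^2 * ((2*pi*k)^2 + tau^2)^(-alpha/2).
    Normalization conventions vary; this is one common choice."""
    rng = np.random.default_rng(seed)
    k = np.arange(n // 2 + 1)  # rfft wavenumbers 0..n/2
    std = tau**2 * ((2 * np.pi * k) ** 2 + tau**2) ** (-alpha / 2)
    # Complex Gaussian coefficients; irfft enforces Hermitian symmetry.
    coef = std * (rng.standard_normal(n // 2 + 1)
                  + 1j * rng.standard_normal(n // 2 + 1))
    coef[0] = 0.0  # zero the mean mode (assumption: zero-mean field)
    return np.fft.irfft(coef, n=n)

u0 = sample_grf()
print(u0.shape)  # (1024,)
```

Larger `tau` and `alpha` produce smoother samples; the paper's alternative scheme (chebfun's smooth random functions) similarly truncates the Fourier series at a wavelength-dependent cutoff.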
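The training configuration quoted under Experiment Setup (Adam, learning rate 0.001, decay of 0.95 after every 100 steps, 500 epochs) maps directly onto PyTorch's `Adam` plus `StepLR`. A hedged sketch with a placeholder model and loss; the paper's operator architecture and loss are not reproduced here, and whether "steps" means optimizer steps or epochs is an assumption:

```python
import torch

model = torch.nn.Linear(64, 64)  # placeholder for the neural operator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Multiply the learning rate by 0.95 after every 100 steps, as reported.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.95)

for epoch in range(500):             # 500 epochs as reported
    optimizer.zero_grad()
    x = torch.randn(16, 64)          # placeholder batch
    loss = model(x).pow(2).mean()    # placeholder loss, not the paper's
    loss.backward()
    optimizer.step()
    scheduler.step()                 # here one "step" = one epoch (assumption)
```

After 500 scheduler steps the learning rate is 0.001 × 0.95^5 ≈ 7.74e-4; interpreting "steps" as optimizer steps instead would only move `scheduler.step()` inside a batch loop.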