Multiwavelet-based Operator Learning for Differential Equations
Authors: Gaurav Gupta, Xiongye Xiao, Paul Bogdan
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on the Korteweg-de Vries (KdV) equation, Burgers' equation, Darcy Flow, and Navier-Stokes equation. |
| Researcher Affiliation | Academia | Gaurav Gupta, Xiongye Xiao, Paul Bogdan; Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089 |
| Pseudocode | No | The paper includes a diagram of the model architecture in Figure 2, but it does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for reproducing the experiments is available at: https://github.com/gaurav71531/mwt-operator. The code is uploaded with the supplementary materials. |
| Open Datasets | Yes | Unless stated otherwise, the training set is of size 1000 while the test set is of size 200. Part of the datasets is taken from the FNO work [47], while some are generated using the scripts provided by the same authors. We have properly cited the work in Section 3 (Benchmark models). (A hypothetical loading sketch for this split appears after the table.) |
| Dataset Splits | No | The paper states the size of the training and test sets ("training set is of size 1000 while test is of size 200") but does not explicitly provide details for a validation set split (e.g., its size or percentage). |
| Hardware Specification | Yes | All of the experiments are performed on a single Nvidia V100 32 GB GPU. |
| Software Dependencies | No | The paper mentions the use of 'chebfun package [27]' for numerical solutions but does not provide specific version numbers for this or any other software dependencies crucial for reproducibility. |
| Experiment Setup | Yes | All the models (including ours) are trained for a total of 500 epochs using Adam optimizer with an initial learning rate (LR) of 0.001. The LR decays after every 100 epochs with a factor of γ = 0.5. The loss function is taken as relative L2 error [47]. All of the experiments are performed on a single Nvidia V100 32 GB GPU, and the results are averaged over a total of 3 seeds. |
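For the 1000/200 split quoted in the Open Datasets row, a minimal loading sketch is given below. It assumes the Burgers data has been generated as a .mat file via the FNO authors' scripts; the file name `burgers_data_R10.mat` and the field names `a` (input) and `u` (solution) follow the FNO convention and are assumptions, not something this report verifies.

```python
# Hypothetical sketch of the 1000/200 train/test split described above,
# assuming a .mat data file produced by the FNO scripts. If the file is
# saved in MATLAB v7.3 format, h5py would be needed instead of scipy.
import scipy.io
import torch

ntrain, ntest = 1000, 200  # sizes quoted in the paper

data = scipy.io.loadmat("burgers_data_R10.mat")  # assumed file name
x = torch.from_numpy(data["a"]).float()  # input functions (FNO convention)
y = torch.from_numpy(data["u"]).float()  # solution functions

x_train, y_train = x[:ntrain], y[:ntrain]
x_test, y_test = x[-ntest:], y[-ntest:]
```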
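The Experiment Setup row maps directly onto a standard PyTorch training loop (the released mwt-operator code is PyTorch-based). The sketch below is a minimal reconstruction under that assumption; `model` and `train_loader` are hypothetical placeholders, and the relative L2 loss is written out per its usual definition in the FNO work [47].

```python
import torch


def relative_l2_loss(pred, target):
    # Relative L2 error ||pred - target||_2 / ||target||_2 per sample,
    # averaged over the batch (the loss of [47], as quoted above).
    b = pred.shape[0]
    diff = (pred.reshape(b, -1) - target.reshape(b, -1)).norm(dim=1)
    return (diff / target.reshape(b, -1).norm(dim=1)).mean()


def train(model, train_loader, device="cuda"):
    model = model.to(device)
    # Adam with initial LR 0.001; LR decays by a factor of 0.5 every 100 epochs.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)
    for epoch in range(500):  # 500 epochs total
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = relative_l2_loss(model(x), y)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```

`StepLR` with `step_size=100` and `gamma=0.5` reproduces the quoted schedule: the learning rate is halved after epochs 100, 200, 300, and 400.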