Fourier Neural Operator for Parametric Partial Differential Equations
Authors: Zongyi Li, Nikola Borislavov Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation. |
| Researcher Affiliation | Academia | Zongyi Li (zongyili@caltech.edu), Nikola Kovachki (nkovachki@caltech.edu), Kamyar Azizzadenesheli (kamyar@purdue.edu), Burigede Liu (bgl@caltech.edu), Kaushik Bhattacharya (bhatta@caltech.edu), Andrew Stuart (astuart@caltech.edu), Anima Anandkumar (anima@caltech.edu) |
| Pseudocode | No | The paper describes the architecture and methodology verbally and with diagrams (Figure 2), but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the release of open-source code for the described methodology. |
| Open Datasets | No | The paper describes generating synthetic datasets for each problem (Burgers, Darcy, Navier-Stokes) with specific generation procedures. It does not provide concrete access information (link, DOI, formal citation) to a publicly available or open dataset. |
| Dataset Splits | No | The paper states 'Unless otherwise specified, we use N = 1000 training instances and 200 testing instances.' but does not explicitly mention a validation set or its split. |
| Hardware Specification | Yes | All the computation is carried on a single Nvidia V100 GPU with 16GB memory. |
| Software Dependencies | No | The paper mentions general components like 'Adam optimizer' and 'ReLU activation' but does not specify any software libraries with version numbers (e.g., PyTorch 1.x, Python 3.x, CUDA x.x). |
| Experiment Setup | Yes | We construct our Fourier neural operator by stacking four Fourier integral operator layers as specified in (2) and (4) with the ReLU activation as well as batch normalization. Unless otherwise specified, we use N = 1000 training instances and 200 testing instances. We use the Adam optimizer to train for 500 epochs with an initial learning rate of 0.001 that is halved every 100 epochs. We set k_max,j = 16, d_v = 64 for the 1-d problem and k_max,j = 12, d_v = 32 for the 2-d problems. |
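
For concreteness, the Experiment Setup row can be read as the following minimal PyTorch sketch. This is an assumption-laden reconstruction, not the authors' released code (the paper links none): the spectral layer follows the FFT → mode-truncation → linear transform → inverse-FFT pattern the paper's equation (4) describes, and the four-layer stack, ReLU, batch normalization, Adam optimizer, and step learning-rate schedule follow the row above. The two input channels, the pointwise lifting/projection convolutions, and all identifiers are hypothetical.

```python
# Hypothetical sketch of the 1-d setup (k_max,j = 16, d_v = 64); shapes and
# names are assumptions, since the paper provides no reference implementation.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Fourier integral operator layer: FFT, keep the lowest k_max modes,
    apply a learned complex linear transform, then inverse FFT."""

    def __init__(self, channels, k_max):
        super().__init__()
        self.k_max = k_max  # number of retained Fourier modes (k_max,j)
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, k_max, dtype=torch.cfloat))

    def forward(self, x):             # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)      # complex Fourier coefficients
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.k_max] = torch.einsum(
            "bik,iok->bok", x_ft[:, :, :self.k_max], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))


class FNO1d(nn.Module):
    """Four Fourier layers with pointwise skip connections, batch norm,
    and ReLU, between a lifting and a projection map."""

    def __init__(self, channels=64, k_max=16):
        super().__init__()
        self.lift = nn.Conv1d(2, channels, 1)  # lift (a(x), x) to d_v channels
        self.blocks = nn.ModuleList(
            [SpectralConv1d(channels, k_max) for _ in range(4)])
        self.skips = nn.ModuleList(
            [nn.Conv1d(channels, channels, 1) for _ in range(4)])
        self.norms = nn.ModuleList(
            [nn.BatchNorm1d(channels) for _ in range(4)])
        self.project = nn.Conv1d(channels, 1, 1)

    def forward(self, x):
        x = self.lift(x)
        for conv, skip, norm in zip(self.blocks, self.skips, self.norms):
            x = torch.relu(norm(conv(x) + skip(x)))
        return self.project(x)


model = FNO1d(channels=64, k_max=16)       # d_v = 64, k_max,j = 16 (1-d problem)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100, gamma=0.5)  # halve lr every 100 epochs

# Example forward pass: batch of 8 input functions on a 1024-point grid,
# with the two input channels (a(x) and the grid coordinate) being an assumption.
x = torch.randn(8, 2, 1024)
y = model(x)                               # (8, 1, 1024)
```

Under this reading, one `sched.step()` per epoch over 500 epochs reproduces the stated schedule (learning rate 0.001, halved every 100 epochs); the 2-d problems would swap in `rfft2`/`irfft2` and k_max,j = 12, d_v = 32.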