Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs
Authors: Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar
JMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We consider standard PDEs such as the Burgers, Darcy subsurface flow, and the Navier-Stokes equations, and show that the proposed neural operators have superior performance compared to existing machine learning based methodologies, while being several orders of magnitude faster than conventional PDE solvers. Numerical Results In this section, we compare the proposed neural operator with other supervised learning approaches, using the four test problems outlined in Section 6. |
| Researcher Affiliation | Collaboration | Nikola Kovachki (Nvidia), Zongyi Li (Caltech), Burigede Liu (Cambridge University), Kamyar Azizzadenesheli (Nvidia), Kaushik Bhattacharya (Caltech), Andrew Stuart (Caltech), Anima Anandkumar (Caltech) |
| Pseudocode | Yes | We present a V-cycle algorithm, see Figure 4, for efficiently computing (20). It consists of two steps: the downward pass and the upward pass. ... Downward pass, for l = 1, ..., L: v̌^(t+1)_{l+1} = σ(v̂^(t)_{l+1} + K_{l+1,l} v̌^(t+1)_l) (24). Upward pass, for l = L, ..., 1: v̂^(t+1)_l = σ((W_l + K_{l,l}) v̌^(t+1)_l + K_{l,l+1} v̂^(t+1)_{l+1}) (25). |
| Open Source Code | Yes | The code is available at https://github.com/zongyi-li/graph-pde and https://github.com/zongyi-li/fourier_neural_operator. |
| Open Datasets | No | To create the dataset used for training, solutions to (38) are obtained by numerical integration using the Green's function on a uniform grid with 85 collocation points. We use N = 1000 training examples. To create the dataset used for training, solutions to (39) are obtained using a second-order finite difference scheme on a uniform grid of size 421 × 421. All other resolutions are downsampled from this data set. We use N = 1000 training examples. To create the dataset used for training, solutions to (41) are obtained using a pseudo-spectral split step method... We use N = 1000 training examples. To create the dataset used for training, solutions to (44) are obtained using a pseudo-spectral split step method... Data is obtained on a uniform 256 × 256 grid and all other resolutions are subsampled from this data set. |
| Dataset Splits | Yes | Unless otherwise specified, we use N = 1000 training instances and 200 testing instances. |
| Hardware Specification | Yes | All the computations are carried on a single Nvidia V100 GPU with 16GB memory. The computations presented here were conducted on the Resnick High Performance Cluster at the California Institute of Technology. |
| Software Dependencies | No | We use the Adam optimizer to train for 500 epochs with an initial learning rate of 0.001 that is halved every 100 epochs. We set the channel dimensions d_v0 = ... = d_v3 = 64 for all one-dimensional problems and d_v0 = ... = d_v3 = 32 for all two-dimensional problems. The kernel networks κ(0), . . . , κ(3) are standard feed-forward neural networks with three layers and widths of 256 units. |
| Experiment Setup | Yes | We use the Adam optimizer to train for 500 epochs with an initial learning rate of 0.001 that is halved every 100 epochs. We set the channel dimensions d_v0 = ... = d_v3 = 64 for all one-dimensional problems and d_v0 = ... = d_v3 = 32 for all two-dimensional problems. The kernel networks κ(0), . . . , κ(3) are standard feed-forward neural networks with three layers and widths of 256 units. |