Factorized Fourier Neural Operators

Authors: Alasdair Tran, Alexander Mathews, Lexing Xie, Cheng Soon Ong

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On several challenging benchmark PDEs on regular grids, structured meshes, and point clouds, the F-FNO can scale to deeper networks and outperform both the FNO and the geo-FNO, reducing the error by 83% on the Navier-Stokes problem, 31% on the elasticity problem, 57% on the airfoil flow problem, and 60% on the plastic forging problem.
Researcher Affiliation | Academia | Alasdair Tran¹, Alexander Mathews¹, Lexing Xie¹, Cheng Soon Ong¹,² — ¹Australian National University, ²Data61, CSIRO
Pseudocode | No | No structured pseudocode or algorithm blocks labeled as 'Pseudocode' or 'Algorithm' were found.
Open Source Code | Yes | Code, datasets, and pre-trained models are available at https://github.com/alasdairtran/fourierflow.
Open Datasets | Yes | Torus Li is publicly released by Li et al. (2021a) and is used to benchmark our model against the original FNO. ... Finally, we regenerate Torus Kochkov (Fig. 1a) using the same settings provided by Kochkov et al. (2021)... The Elasticity, Airfoil, and Plasticity datasets (final three rows in Table 1) are taken from Li et al. (2022).
Dataset Splits | Yes | Table A.1: An overview of the four fluid dynamics datasets on regular grids. Our newly generated datasets, Torus Vis and Torus Vis Force, contain simulation data with a wider variety of viscosities and forces than Torus Li (Li et al., 2021a) and Torus Kochkov (Kochkov et al., 2021). Note that Li et al. (2021a) did not generate a validation set. ... Table A.2: An overview of the three PDE datasets on irregular geometries. These datasets were generated by Li et al. (2022).
Hardware Specification | Yes | Models are implemented in PyTorch (Paszke et al., 2017) and trained on a single Titan V GPU.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2017)' and 'Adam optimizer (Kingma & Ba, 2015)' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | warming up the learning rate to 2.5 × 10⁻³ for the first 500 steps and then decaying it using the cosine function (Loshchilov & Hutter, 2017). We use ReLU as our non-linear activation function, clip the gradient value at 0.1, and use the Adam optimizer (Kingma & Ba, 2015) with β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸. The weight decay factor is set to 10⁻⁴ and is decoupled from the learning rate (Loshchilov & Hutter, 2019).
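The quoted optimizer and scheduler settings can be expressed as a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' training script from the repository above: the placeholder model, the total step count, and the warm-up/cosine helper are assumed; only the peak learning rate, 500-step warm-up, cosine decay, gradient-value clipping at 0.1, Adam betas/epsilon, and decoupled weight decay of 10⁻⁴ come from the quoted setup.

```python
import math
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

# Placeholder model; the actual F-FNO architecture lives in the authors' repo
# (https://github.com/alasdairtran/fourierflow).
model = torch.nn.Linear(64, 64)

# Decoupled weight decay (AdamW) with the hyperparameters quoted above.
optimizer = AdamW(
    model.parameters(),
    lr=2.5e-3,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=1e-4,
)

WARMUP_STEPS = 500      # warm-up length taken from the quote
TOTAL_STEPS = 100_000   # assumed total step count; not stated in the excerpt

def lr_lambda(step: int) -> float:
    """Warm up for the first 500 steps, then decay with a cosine schedule."""
    if step < WARMUP_STEPS:
        return step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = LambdaLR(optimizer, lr_lambda)

# Inside the training loop, gradients are clipped by value at 0.1:
#   loss.backward()
#   torch.nn.utils.clip_grad_value_(model.parameters(), 0.1)
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```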