Amortized Fourier Neural Operators
Authors: Zipeng Xiao, Siqi Kou, Zhongkai Hao, Bokai Lin, Zhijie Deng
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we validate the effectiveness of our proposed method by conducting extensive experiments on challenging benchmarks governed by typical solid and fluid PDEs. |
| Researcher Affiliation | Academia | ¹Qing Yuan Research Institute, SEIEE, Shanghai Jiao Tong University; ²Dept. of Comp. Sci. & Tech., Tsinghua University |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] |
| Open Datasets | Yes | We evaluate the performance of AM-FNO on six well-established benchmarks. These benchmarks include Burgers, Darcy, and NS-2D, which are presented in regular grids with varying dimensions [18]. We extend our experiments to assess the method's performance in different geometries, including the Pipe, Airfoil, and Elasticity benchmarks [17]. Additionally, we incorporate the compressible fluid dynamics (CFD) 1D and 2D benchmarks [30]. |
| Dataset Splits | No | Table 1 provides 'Ntrain' and 'Ntest' counts, but there is no explicit mention of a separate validation set or its split details. |
| Hardware Specification | Yes | The batch size is selected from {4, 8, 16, 32}, and the experiments are conducted on a single 4090 GPU. |
| Software Dependencies | No | The paper mentions software components like the 'AdamW optimizer' and 'Gaussian Error Linear Unit (GELU)' but does not provide specific version numbers for any libraries or frameworks (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | We train all models for 500 epochs using the AdamW optimizer [23] with a cosine annealing scheduler [22]. The initial learning rate is 10⁻³, and the weight decay is set to 10⁻⁴. Our models consist of 4 layers with a width of 32 and process all the frequency modes of the training data. The Gaussian Error Linear Unit (GELU) is used as the activation function [13]. For AM-FNO (KAN), the number of spline grids is selected from {24, 32, 48}, while for AM-FNO (MLP), the number of basis functions is set to 32 or 48. AM-FNO (MLP) utilizes Chebyshev basis functions as the orthogonal basis functions, as elaborated in the appendix. The batch size is selected from {4, 8, 16, 32}, and the experiments are conducted on a single 4090 GPU. |
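
The quoted setup maps directly onto a standard PyTorch training configuration. The following is a minimal sketch of the optimizer and scheduler choices reported above, assuming PyTorch; the `model` stub is a hypothetical stand-in for the AM-FNO architecture, which is defined in the paper and its official code rather than here.

```python
import torch
import torch.nn as nn

# Hypothetical placeholder for AM-FNO: 4 layers of width 32 with GELU,
# mirroring the depth/width/activation quoted above (not the real operator).
model = nn.Sequential(
    *[nn.Sequential(nn.Linear(32, 32), nn.GELU()) for _ in range(4)]
)

# AdamW with the reported initial learning rate and weight decay.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Cosine annealing scheduler over the full 500-epoch run.
epochs = 500
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # ... one pass over the training set (batch size from {4, 8, 16, 32}) ...
    scheduler.step()
```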