Beyond Regular Grids: Fourier-Based Neural Operators on Arbitrary Domains

Authors: Levi E. Lingsch, Mike Yan Michelis, Emmanuel De Bezenac, Sirani M. Perera, Robert K. Katzschmann, Siddhartha Mishra

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "With extensive empirical evaluation, we demonstrate that the proposed method allows us to extend neural operators to arbitrary point distributions with significant gains in training speed over baselines while retaining or improving the accuracy of Fourier neural operators (FNOs) and related neural operators." From Section 3, Experimental Results: "In this section, our aim is to investigate the performance of the presented Direct Spectral Evaluations (DSE) within various neural operator architectures on a challenging suite of diverse PDE tasks." (A hedged sketch of the DSE idea follows the table.)
Researcher Affiliation | Academia | 1 Seminar for Applied Mathematics, ETH Zurich, Switzerland; 2 ETH AI Center, ETH Zurich, Switzerland; 3 Soft Robotics Lab, ETH Zurich, Switzerland; 4 Department of Mathematics, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Source code available on GitHub: https://github.com/camlab-ethz/DSE-for-NeuralOperators"
Open Datasets | Yes | "The training and test data, presented in (Li et al., 2020a) for this problem, is used." ... "To this end, we use the (MERRA-2) satellite data to forecast the surface-level specific humidity (EarthData, 2021-2023)."
Dataset Splits | No | The paper mentions using a validation set to minimize error (SM A.3), but it does not give concrete split information (percentages, counts, or an explicit methodology) for training, validation, and testing, which would be needed for reproduction. (An illustrative split specification follows the table.)
Hardware Specification | Yes | "All experiments are performed on the Nvidia GeForce RTX 3090 with 24GB memory."
Software Dependencies | No | The paper names PyTorch as its key machine learning library but does not provide version numbers for it or for any other software components.
Experiment Setup | No | The paper states that hyperparameters were chosen by a simple grid search and that models were trained until convergence with an L1 loss, but it does not report specific values (e.g., learning rate, batch size, optimizer settings, number of epochs) in the main text. (An illustrative grid-search setup follows the table.)
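
The Research Type row quotes the paper's central claim: Fourier layers can be applied on arbitrary (non-equispaced) point distributions by evaluating the spectral transform directly rather than via an FFT. As a rough illustration of that idea only, here is a minimal, hypothetical 1D sketch in which a Vandermonde-style matrix replaces the FFT; the function name, normalization, and mode convention are our assumptions, not the paper's implementation.

```python
import torch

def dse_fourier_layer_1d(u, x, weights):
    """Hypothetical sketch of a direct spectral evaluation (DSE) in 1D.

    Instead of an FFT (which requires an equispaced grid), the Fourier
    basis is evaluated at arbitrary sample points and applied as a dense
    matrix multiplication.

    u:       (batch, n_points) real function values at the points x
    x:       (n_points,) sample locations in [0, 1), arbitrarily distributed
    weights: (n_modes,) complex learned spectral multipliers (FNO-style)
    """
    n_modes = weights.shape[0]
    k = torch.arange(n_modes, dtype=x.dtype)
    # Vandermonde-style matrix V[k, j] = exp(-2*pi*i * k * x_j)
    V = torch.exp(-2j * torch.pi * k[:, None] * x[None, :])
    coeffs = torch.einsum("kj,bj->bk", V, u.to(V.dtype)) / x.numel()  # forward
    coeffs = coeffs * weights                     # pointwise spectral mixing
    # Inverse transform back onto the same arbitrary points
    return torch.einsum("kj,bk->bj", V.conj(), coeffs).real

# Toy usage on non-equispaced points (illustrative only)
x = torch.sort(torch.rand(256)).values
u = torch.sin(2 * torch.pi * x).unsqueeze(0)
w = torch.randn(16, dtype=torch.cfloat)
out = dse_fourier_layer_1d(u, x, w)
```

Note the trade-off such a construction implies: the dense transform costs O(n_modes x n_points) rather than the O(n log n) of an FFT on an equispaced grid, which is the kind of comparison the paper's training-speed experiments address.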
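
For the Dataset Splits row, the following is the sort of explicit, seeded split specification that would make the evaluation reproducible. The 80/10/10 fractions and the fixed seed are illustrative assumptions, not values reported in the paper.

```python
import torch
from torch.utils.data import random_split

def make_splits(dataset, fractions=(0.8, 0.1, 0.1), seed=0):
    """Split a dataset into train/val/test subsets with a fixed seed.

    Hypothetical fractions: the paper does not report its actual split
    sizes, so these values are placeholders.
    """
    n = len(dataset)
    sizes = [int(f * n) for f in fractions]
    sizes[-1] = n - sum(sizes[:-1])            # absorb rounding remainder
    gen = torch.Generator().manual_seed(seed)  # fixed seed for reproducibility
    return random_split(dataset, sizes, generator=gen)
```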
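
Likewise, since the Experiment Setup row flags that the paper reports only a "simple grid search" and L1-loss training without concrete values, this minimal sketch shows the kind of specification a reproduction would need. The search space, epoch count, optimizer choice, and helper names (SEARCH_SPACE, run_config, grid_search) are all placeholders we invented, not the paper's settings.

```python
import itertools
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical search space; the paper states a grid search was used
# but does not report the candidate values.
SEARCH_SPACE = {"lr": [1e-3, 5e-4, 1e-4], "batch_size": [16, 32]}

def run_config(make_model, train_ds, val_ds, lr, batch_size, epochs=50):
    """Train one configuration with the L1 loss and return its validation error.

    Assumes val_ds is a TensorDataset holding (inputs, targets).
    """
    model = make_model()
    loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()  # the paper trains with an L1 loss
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    with torch.no_grad():
        xv, yv = val_ds.tensors
        return loss_fn(model(xv), yv).item()

def grid_search(make_model, train_ds, val_ds):
    """Exhaustively evaluate the grid and return (best error, best config)."""
    best = None
    for lr, bs in itertools.product(SEARCH_SPACE["lr"], SEARCH_SPACE["batch_size"]):
        err = run_config(make_model, train_ds, val_ds, lr, bs)
        if best is None or err < best[0]:
            best = (err, {"lr": lr, "batch_size": bs})
    return best
```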