Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Discretization-invariance? On the Discretization Mismatch Errors in Neural Operators

Authors: Wenhan Gao, Ruichen Xu, Yuefan Deng, Yi Liu

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental In this section, we conduct experiments to substantiate our claims and demonstrate the effectiveness of our proposed CROP components. In Sec. 5.1, we conduct experiments on the incompressible Navier-Stokes equation to showcase the accumulation of DMEs and verify our CROP components' ability to perform cross-resolution tasks. In Sec. 5.2, we conduct experiments to demonstrate the efficient multi-spatio-scale learning capability of our CROP components.
Researcher Affiliation Academia 1) Department of Applied Mathematics and Statistics, Stony Brook University; 2) Department of Computer Science, Stony Brook University
Pseudocode No The paper describes methods using mathematical formulations and descriptive text, but it does not include a clearly labeled pseudocode block or algorithm section.
Open Source Code Yes The code is publicly available at https://github.com/wenhangao21/ICLR25-CROP. We provide the source code, datasets, pre-trained models, and configuration necessary to replicate the key experiments in that repository.
Open Datasets Yes This dataset is generated using the data generation scripts provided by the authors of Li et al. (2021) in their GitHub repository. The PDE is solved using numerical solvers on a 256 x 256 grid. We learn the non-linear operator mapping a -> u following the exact same setup and data as in Li et al. (2021) and the operator mapping f -> u following the generation setup in Hasani & Ward (2024). We directly use the data provided by Raonic et al. (2023) and follow their setups. The dataset is provided by De Hoop et al. (2022) directly, where this PDE is solved using a finite element method on a 100 x 100 grid.
Dataset Splits Yes For Reynolds number 5,000, the dataset contains 1,024 trajectories and follows a 768/128/128 split. As a higher Reynolds number leads to a more difficult learning task, for Reynolds number 10,000, the dataset contains 2,048 trajectories and follows a 1792/128/128 split. The data is downsampled to 64 x 64 for training, and the train/val/test data follows an 800/100/100 split.
Hardware Specification Yes All the experimental results, especially the timing results, are recorded on NVIDIA RTX A6000 with 48 GB GDDR6.
Software Dependencies No The paper mentions that code is provided on GitHub with configurations and instructions, but it does not explicitly state specific version numbers for software dependencies (e.g., Python, PyTorch, CUDA versions) within the main text of the paper.
Experiment Setup Yes All models are trained using the hyperparameters in Table 9, except those for which the original work provides the hyperparameter settings, such as FNO on the Darcy flow and the Navier-Stokes equation with low Reynolds number, with early stopping implemented if the validation error does not improve over a specified number of epochs. The exceptions in which we do not apply early stopping are the nonlinear Darcy example, the Helmholtz example, and the Navier-Stokes equation with low Reynolds number; we follow the original setups from Li et al. (2021), and there is no validation set. Table 9: Training Hyper-parameters: Learning Rate 0.001 (0.0005 for ResNet; 0.0001 for DeepONet), Weight Decay 1e-6, Scheduler Step 10, Scheduler Gamma 0.98, Epochs 1000 (2000 for DeepONet), Batch Size 16, Patience for Early Stopping 100 (500 for DeepONet).
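The training schedule reported in Table 9 (learning rate 0.001 decayed by gamma 0.98 every 10 epochs, early stopping with patience 100) can be sketched as follows. This is an illustrative, dependency-free sketch only; the function and class names are hypothetical and the authors' actual training script may differ.

```python
# Hypothetical sketch of Table 9's schedule: step decay of the
# learning rate (gamma 0.98 every 10 epochs) and early stopping
# on validation error (patience 100 epochs). Illustrative names,
# not the authors' code.

def stepped_lr(base_lr: float, epoch: int, step: int = 10, gamma: float = 0.98) -> float:
    """Learning rate in effect at `epoch` under step decay."""
    return base_lr * gamma ** (epoch // step)

class EarlyStopping:
    """Signal a stop when validation error fails to improve for `patience` epochs."""

    def __init__(self, patience: int = 100):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_error: float) -> bool:
        """Record one epoch's validation error; return True when training should stop."""
        if val_error < self.best:
            self.best = val_error
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a PyTorch setup, the same effect would typically come from `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.98)` together with a hand-rolled early-stopping loop, since PyTorch has no built-in early stopping.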