Deep Equilibrium Based Neural Operators for Steady-State PDEs
Authors: Tanya Marwah, Ashwini Pokle, J. Zico Kolter, Zachary Lipton, Jianfeng Lu, Andrej Risteski
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments indicate that FNO-DEQ-based architectures outperform FNO-based baselines with 4× the number of parameters in predicting the solution to steady-state PDEs such as Darcy Flow and steady-state incompressible Navier-Stokes. Further, we show a universal approximation result that demonstrates that FNO-DEQ can approximate the solution to any steady-state PDE that can be written as a fixed-point equation. |
| Researcher Affiliation | Collaboration | 1 Carnegie Mellon University, 2 Bosch Center for AI, 3 Duke University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to its own source code. It only mentions releasing a dataset and using third-party code (JAX-CFD). |
| Open Datasets | Yes | For experiments with Darcy Flow, we use the dataset provided by Li et al. [2020a] |
| Dataset Splits | No | The paper provides training and testing sample counts ('1024 data samples and tested on 500 samples' for Darcy Flow; '4500 training samples and 500 testing samples' for Navier-Stokes) but does not specify a validation split. |
| Hardware Specification | Yes | We run all our experiments on a combination of NVIDIA RTX A6000, NVIDIA GeForce RTX 2080 Ti, and RTX 3080 Ti GPUs. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'JAX-CFD package' but does not provide specific version numbers for these or other key software dependencies required for reproducibility. |
| Experiment Setup | Yes | We train all the networks for 500 epochs with the Adam optimizer. The learning rate is set to 0.001 for Darcy Flow and 0.005 for Navier-Stokes. We use a learning rate weight decay of 1e-4 for both Navier-Stokes and Darcy Flow. The batch size is set to 32. ... The maximum number of Anderson solver steps is kept fixed at 32 for Darcy Flow, and 16 for Navier-Stokes. ... We use τ = 0.5 and S = 1 for Darcy Flow, and τ = 0.8 and S = 3 for Navier-Stokes. |
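The table's key technical claim is that FNO-DEQ treats the PDE solution as the fixed point of a learned operator, u = G(u), solved at inference time with a capped number of Anderson acceleration steps (32 for Darcy Flow, 16 for Navier-Stokes). The following is a generic, self-contained sketch of Anderson acceleration on a toy linear contraction; it is illustrative only and not the authors' implementation (their solver operates on FNO feature tensors, not NumPy vectors).

```python
import numpy as np

def anderson(g, x0, m=5, max_iter=32, tol=1e-8):
    """Anderson-accelerated fixed-point iteration for x = g(x).

    Keeps a short history (memory m) of iterates and their residuals,
    and extrapolates the next iterate via a least-squares combination
    of recent residual differences. A generic sketch, not the paper's code.
    """
    x = x0.copy()
    X_hist, G_hist = [], []
    for k in range(max_iter):
        gx = g(x)
        f = gx - x                              # residual g(x) - x
        if np.linalg.norm(f) < tol:
            return gx, k                        # converged fixed point
        X_hist.append(x); G_hist.append(gx)
        X_hist, G_hist = X_hist[-m:], G_hist[-m:]
        if len(X_hist) > 1:
            # Columns are past residuals f_i = g(x_i) - x_i.
            F = np.stack([G_hist[i] - X_hist[i] for i in range(len(X_hist))], axis=1)
            dF = F[:, 1:] - F[:, :-1]           # residual differences
            G = np.stack(G_hist, axis=1)
            dG = G[:, 1:] - G[:, :-1]
            # Minimize ||f - dF @ gamma|| and extrapolate.
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
        else:
            x = gx                              # plain Picard step to seed history
    return x, max_iter

# Toy usage: fixed point of the contraction g(x) = A x + b,
# whose exact solution solves (I - A) x = b.
A = np.array([[0.5, 0.2], [0.1, 0.3]])
b = np.array([1.0, 2.0])
x_star, iters = anderson(lambda v: A @ v + b, np.zeros(2))
```

For a linear problem this small, Anderson acceleration converges in a handful of steps; the paper's capped step counts (16–32) play the same role of bounding solver cost per forward pass.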
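The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. The values below are the ones reported in the paper; the dict layout itself is illustrative and not taken from any released code.

```python
# Reported FNO-DEQ training settings (values from the paper's setup
# section; this structure is a hypothetical convenience, not the
# authors' configuration format).
CONFIG = {
    "epochs": 500,
    "optimizer": "Adam",
    "learning_rate": {"darcy_flow": 1e-3, "navier_stokes": 5e-3},
    "weight_decay": 1e-4,
    "batch_size": 32,
    "anderson_max_steps": {"darcy_flow": 32, "navier_stokes": 16},
    "tau": {"darcy_flow": 0.5, "navier_stokes": 0.8},
    "S": {"darcy_flow": 1, "navier_stokes": 3},
}
```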