Learning to Solve PDE-constrained Inverse Problems with Graph Networks
Authors: Qingqing Zhao, David B Lindell, Gordon Wetzstein
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that GNNs combined with autodecoder-style priors are well-suited for these tasks, achieving more accurate estimates of initial conditions or physical parameters than other learned approaches when applied to the wave equation or Navier-Stokes equations. We also demonstrate computational speedups of up to 90× using GNNs compared to principled solvers. |
| Researcher Affiliation | Academia | Qingqing Zhao 1 David B. Lindell 1 Gordon Wetzstein 1 1Stanford University. |
| Pseudocode | No | The paper provides mathematical equations and describes the steps of the model in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code open, nor does it include a link to a code repository. |
| Open Datasets | No | The paper states: "The training dataset is composed of 1100 simulated time-series trajectories using 37 separate meshes... supervised on a dataset of ground truth wave equation solutions generated with an open source FEM solver (FEniCS (Logg et al., 2012))." While the authors use an open-source solver to *generate* their dataset, they do not provide direct access (link, DOI, or specific repository) to the *generated dataset* itself. |
| Dataset Splits | Yes | The training dataset is composed of 1100 simulated time-series trajectories using 37 separate meshes. We evaluate on 40 held-out trajectories across 3 different meshes. [...] Our dataset consists of 850 training trajectories on 55 meshes, and 50 test trajectories on 5 meshes. |
| Hardware Specification | Yes | To obtain the average runtime per optimization iteration, we run our approaches on 8 CPU cores (FEM solver) and a single Quadro RTX 6000 GPU (U-Net and GNN). |
| Software Dependencies | No | The paper mentions software such as FEniCS, the ADAM optimizer, a U-Net, and a GNN, usually with a citation for the method's source (e.g., "FEniCS (Logg et al., 2012)" or "ADAM optimizer (Kingma & Ba, 2014)"). However, it does not provide specific version numbers for these software components (e.g., FEniCS 2019.1 or PyTorch 1.9). |
| Experiment Setup | Yes | We train the network with the ADAM optimizer (Kingma & Ba, 2014) for 1500 epochs using a batch size of 32 and a learning rate of 5e-4. (Appendix A.2, Prior Network). We train the network for 500 epochs using the ADAM optimizer (Kingma & Ba, 2014), learning rate 0.0004 and batch size 10. (Appendix A.3, U-Net). We train the network using the ADAM optimizer (Kingma & Ba, 2014) with a learning rate decaying exponentially from 1e-4 to 1e-8 over 500 epochs. (Appendix A.3, GNN). |
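The GNN training schedule quoted above (learning rate decaying exponentially from 1e-4 to 1e-8 over 500 epochs) implies a fixed per-epoch multiplicative factor. A minimal sketch of that schedule in plain Python, assuming a standard per-epoch exponential decay (the paper does not specify the framework or scheduler used):

```python
# Exponential learning-rate decay from 1e-4 to 1e-8 over 500 epochs,
# as described in Appendix A.3 (GNN). Pure-Python illustration;
# the exact scheduler used by the authors is not stated in the paper.

LR_START, LR_END, EPOCHS = 1e-4, 1e-8, 500

# Per-epoch factor chosen so that LR_START * gamma**EPOCHS == LR_END.
gamma = (LR_END / LR_START) ** (1.0 / EPOCHS)

def lr_at(epoch: int) -> float:
    """Learning rate at the start of a given epoch under exponential decay."""
    return LR_START * gamma ** epoch

print(f"{lr_at(0):.1e}")    # 1.0e-04
print(f"{lr_at(500):.1e}")  # 1.0e-08
```

In a PyTorch setup this would correspond to `torch.optim.lr_scheduler.ExponentialLR` with the same `gamma`, stepped once per epoch.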