Learning Interface Conditions in Domain Decomposition Solvers
Authors: Ali Taghibakhshi, Nicolas Nytko, Tareq Uz Zaman, Scott MacLachlan, Luke Olson, Matthew West
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The performance of the learned linear solvers is compared with both classical and optimized domain decomposition algorithms, for both structured- and unstructured-grid problems. |
| Researcher Affiliation | Academia | Ali Taghibakhshi, Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA (alit2@illinois.edu); Nicolas Nytko, Computer Science, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA (nnytko2@illinois.edu); Tareq Uz Zaman, Scientific Computing Program, Memorial University of Newfoundland and Labrador, St. John's, NL, Canada (tzaman@mun.ca); Scott MacLachlan, Mathematics and Statistics, Memorial University of Newfoundland and Labrador, St. John's, NL, Canada (smaclachlan@mun.ca); Luke Olson, Computer Science, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA (lukeo@illinois.edu); Matthew West, Mechanical Science and Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA (mwest@illinois.edu) |
| Pseudocode | No | The paper describes methods and processes but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | All code and data for this paper is at https://github.com/compdyn/learning-oras (MIT licensed). |
| Open Datasets | No | The training set in our study consists of 1000 unstructured grids with piecewise linear finite elements and with grids ranging from 90–850 nodes (and an average of 310 nodes). The grids are generated by choosing either a regular grid (randomly selected 60% of the time) or a randomly generated convex polygon; pygmsh [33] is used to generate the mesh on the polygon interior. (A hedged generation sketch follows the table.) |
| Dataset Splits | No | The paper mentions a training set and a testing set but does not describe a separate validation split or a model-selection procedure used during training. |
| Hardware Specification | Yes | All training was performed on an 8-core i9 MacBook Pro using CPU only. |
| Software Dependencies | No | The code was implemented using PyTorch Geometric [35], PyAMG [36], and NetworkX [37]. Specific version numbers for these libraries are not provided. (A version-logging snippet follows the table.) |
| Experiment Setup | Yes | We train the GNN for four epochs with a mini-batch size of 25 using the Adam optimizer [34] with a fixed learning rate of 10⁻⁴. For the numerical evaluation of the loss function (10) we use K = 4 iterations and m = 500 samples. (A hedged training-loop sketch follows the table.) |
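
Since the training grids are generated on the fly rather than drawn from a public dataset, the sketch below illustrates the unstructured case described in the Open Datasets row. It assumes pygmsh ≥ 7 and samples the convex polygon as the hull of random points; the paper does not specify its polygon sampler, so `random_convex_polygon_mesh` and its parameters are hypothetical.

```python
# Hedged sketch: meshing a randomly generated convex polygon with pygmsh,
# as the paper describes for its unstructured training grids. The convex-hull
# construction and all parameter values are assumptions, not the authors' code.
import numpy as np
import pygmsh
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def random_convex_polygon_mesh(n_candidates=12, mesh_size=0.12):
    """Mesh the convex hull of random 2D points with linear triangles."""
    pts = rng.random((n_candidates, 2))
    hull = ConvexHull(pts)
    vertices = pts[hull.vertices]  # hull vertices, counterclockwise order
    with pygmsh.geo.Geometry() as geom:
        geom.add_polygon(vertices.tolist(), mesh_size=mesh_size)
        mesh = geom.generate_mesh()
    # Node coordinates (drop the unused z column) and triangle connectivity.
    return mesh.points[:, :2], mesh.cells_dict["triangle"]

points, triangles = random_convex_polygon_mesh()
print(points.shape, triangles.shape)
```

Varying `n_candidates` and `mesh_size` would control the node count; the paper's grids span 90–850 nodes.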
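
Because the paper names its libraries but not their versions, a small snippet like the following can record the environment when re-running the released code. It assumes only that the four packages are importable.

```python
# Log the versions of the paper's stated dependencies for reproducibility.
import torch
import torch_geometric
import pyamg
import networkx

print("torch:", torch.__version__)
print("torch_geometric:", torch_geometric.__version__)
print("pyamg:", pyamg.__version__)
print("networkx:", networkx.__version__)
```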
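
Finally, a minimal sketch of the quoted optimization setup, assuming PyTorch Geometric: Adam with a fixed learning rate of 10⁻⁴, four epochs, and mini-batches of 25 graphs. `TinyGNN`, `random_graph`, and the MSE loss are runnable stand-ins only; the paper's actual GNN and its convergence-factor loss (Eq. (10), K = 4, m = 500) are in the authors' repository at https://github.com/compdyn/learning-oras.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv

# Stand-in GNN: two graph convolutions producing one value per node.
class TinyGNN(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(1, hidden)
        self.conv2 = GCNConv(hidden, 1)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

# Stand-in dataset: random small graphs with node features and targets.
def random_graph(n=50):
    edge_index = torch.randint(0, n, (2, 4 * n))
    return Data(x=torch.randn(n, 1), y=torch.randn(n, 1), edge_index=edge_index)

train_set = [random_graph() for _ in range(100)]
loader = DataLoader(train_set, batch_size=25, shuffle=True)  # mini-batches of 25

model = TinyGNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # fixed LR, as quoted

for epoch in range(4):  # four epochs, as quoted
    for batch in loader:
        optimizer.zero_grad()
        # The paper minimizes a stochastic estimate of the solver's convergence
        # factor (Eq. (10), K = 4 iterations, m = 500 samples); MSE is used
        # here purely as a runnable placeholder.
        loss = F.mse_loss(model(batch.x, batch.edge_index), batch.y)
        loss.backward()
        optimizer.step()
```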