A composable machine-learning approach for steady-state simulations on high-resolution grids

Authors: Rishikesh Ranade, Chris Hill, Lalit Ghule, Jay Pathak

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed approach is validated on more than 5 steady-state PDEs across different PDE conditions on highly-resolved grids, and comparisons are made with the commercial solver Ansys Fluent, as well as 4 other state-of-the-art ML methods. The numerical experiments show that our approach outperforms ML baselines in terms of 1) accuracy across quantitative metrics and 2) generalization to out-of-distribution conditions as well as domain sizes. Additionally, we provide results for a large number of ablation experiments conducted to highlight components of our approach that strongly influence the results.
Researcher Affiliation | Industry | Rishikesh Ranade, Office of CTO, Ansys Inc., Canonsburg, PA 15317, rishikesh.ranade@ansys.com; Chris Hill, Fluids Business Unit, Ansys Inc., Lebanon, NH 03766, chris.hill@ansys.com; Lalit Ghule, Office of CTO, Ansys Inc., Canonsburg, PA 15317, lalit.ghule@ansys.com; Jay Pathak, Office of CTO, Ansys Inc., San Jose, CA 95134, jay.pathak@ansys.com
Pseudocode | Yes | Algorithm 1: Solution methodology of the CoMLSim approach
1. Domain Decomposition: computational domain Ω → subdomains Ω_c
2. Initialize solution on all Ω_c: p(x) = 0.0 for all x ∈ Ω_c
3. Encode solutions on Ω_c: η_p = e_u(p(Ω_c))
4. Encode conditions on Ω_c:
5.    η_g = e_g(g(Ω_c)), η_b = e_b(b(Ω_c)), η_s = e_s(s(Ω_c))
6. Set convergence tolerance: ε_t = 1e-8
7. while ε > ε_t do
8.    for Ω_c ∈ Ω do
9.       Gather neighbors of Ω_c: Ω_nb = [Ω_c, Ω_left, Ω_right, ...]
10.      η̃_p = Θ(η_p^nb, η_b^nb, η_g^nb, η_s^nb)
11.   Compute L2 norm: ε = ||η̃_p − η_p||_2^2
12.   Update: η_p ← η̃_p for all Ω_c ∈ Ω
13. Decode PDE solution on all Ω_c: p = g_p(η_p(Ω_c))
(A hedged Python sketch of this iterative procedure is given after the table.)
Open Source Code | No | No. The source code is proprietary. However, the authors state that they will provide the data and code for the main experiment once the paper is accepted.
Open Datasets | No | The paper states that solutions for the experiments are generated using Ansys Fluent (e.g., '256 training and 100 testing solutions are generated using Ansys Fluent.'), but does not provide any information or links for public access to these generated datasets.
Dataset Splits | Yes | In this case, 256 training and 100 testing solutions are generated using Ansys Fluent.
Hardware Specification | No | The paper states in its checklist that hardware details are provided in the main paper and supplementary materials, but the main paper provided does not specify exact GPU/CPU models or other detailed hardware specifications.
Software Dependencies | No | The paper mentions commercial software like Ansys Fluent and other ML baselines (UNet, FNO, DeepONet) but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | Yes, all the details are provided in the main paper as well as the supplementary materials.
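The iterative inference loop in Algorithm 1 can be illustrated with a short sketch. The Python snippet below is a minimal, hedged illustration of the control flow only, not the authors' implementation (which is proprietary): the encoder/decoder stubs (e_u, e_g, e_b, e_s, g_p), the latent update Theta, and the driver comlsim_solve are hypothetical stand-ins invented for this sketch, since the paper's actual network architectures are not given in this table.

```python
import numpy as np

# Hypothetical stubs standing in for the paper's trained networks: the
# solution/condition encoders, the solution decoder, and the latent update
# network Theta. Their real architectures and signatures are assumptions here.
def e_u(p_sub): return p_sub.mean(axis=-1, keepdims=True)   # solution encoder (stub)
def e_g(g_sub): return g_sub.mean(axis=-1, keepdims=True)   # geometry/condition encoder (stub)
def e_b(b_sub): return b_sub.mean(axis=-1, keepdims=True)   # boundary-condition encoder (stub)
def e_s(s_sub): return s_sub.mean(axis=-1, keepdims=True)   # source-term encoder (stub)
def g_p(eta_p): return np.repeat(eta_p, 8, axis=-1)         # solution decoder (stub)

def Theta(eta_p_nb, eta_b_nb, eta_g_nb, eta_s_nb):
    # Stub latent update: in the paper this is a learned network acting on the
    # latent vectors of a subdomain and its neighbors.
    return 0.5 * eta_p_nb.mean(axis=0) + 0.1 * (eta_b_nb + eta_g_nb + eta_s_nb).mean(axis=0)

def comlsim_solve(subdomains, neighbors, g, b, s, tol=1e-8, max_iters=1000):
    """Iterate latent solution vectors over all subdomains until convergence.

    subdomains : list of subdomain identifiers (domain decomposition of Omega)
    neighbors  : dict mapping each subdomain to [itself, left, right, ...]
    g, b, s    : per-subdomain geometry, boundary, and source fields (arrays)
    """
    # Steps 2-5: zero-initialise the solution and encode solution/conditions per subdomain.
    eta_p = {c: e_u(np.zeros_like(g[c])) for c in subdomains}
    eta_g = {c: e_g(g[c]) for c in subdomains}
    eta_b = {c: e_b(b[c]) for c in subdomains}
    eta_s = {c: e_s(s[c]) for c in subdomains}

    for _ in range(max_iters):                      # Step 7: while eps > eps_t
        eta_p_new, eps = {}, 0.0
        for c in subdomains:                        # Step 8: loop over subdomains
            nb = neighbors[c]                       # Step 9: gather neighbors
            eta_p_new[c] = Theta(                   # Step 10: latent update
                np.stack([eta_p[n] for n in nb]),
                np.stack([eta_b[n] for n in nb]),
                np.stack([eta_g[n] for n in nb]),
                np.stack([eta_s[n] for n in nb]),
            )
            # Step 11: accumulate squared L2 change between old and new latents.
            eps += float(np.sum((eta_p_new[c] - eta_p[c]) ** 2))
        eta_p = eta_p_new                           # Step 12: synchronous update
        if eps < tol:
            break

    # Step 13: decode the converged latent vectors back to the PDE solution.
    return {c: g_p(eta_p[c]) for c in subdomains}
```

The synchronous sweep (update every subdomain's latent vector, then measure the total change) mirrors steps 7-12 of Algorithm 1; an actual implementation would replace the stubs with the trained solution and condition autoencoders and the learned latent update network described in the paper.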