Infinite-Fidelity Coregionalization for Physical Simulation

Authors: Shibo Li, Zheng Wang, Robert Kirby, Shandian Zhe

NeurIPS 2022

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "We show the advantage of our method in several benchmark tasks in computational physics. For evaluation, we tested our method for predicting the solution fields of three benchmark PDEs, including Poisson's, Heat, and Burgers' equations. We also applied IFC in topology structure optimization and computational fluid dynamics (CFD)."
Researcher Affiliation: Academia. Shibo Li, Zheng Wang, Robert M. Kirby, and Shandian Zhe, School of Computing, University of Utah, Salt Lake City, UT 84112 ({shibo, wzhut, kirby, zhe}@cs.utah.edu).
Pseudocode: No. No structured pseudocode or algorithm blocks are presented in the paper.
Open Source Code: No. The paper links to the competing methods' code (DRC, MFHoGP, DMF) and to libraries used (torchdiffeq), but provides no link to, or statement about, open-source code for its own proposed method (IFC) in the main text.
Open Datasets: Yes. "To collect the training data, we run the numerical solvers with several meshes. Denser meshes give examples of higher fidelities." The data generation followed the details provided in Wang et al. (2021).
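Since the multi-fidelity data come from numerical solvers run on meshes of increasing density, the generation process can be illustrated with a small sketch. The Poisson solver below, the random source family, and the mesh resolutions are illustrative assumptions, not the exact setup of Wang et al. (2021); only the idea that denser grids yield higher-fidelity examples follows the paper.

```python
import numpy as np

def solve_poisson_2d(f, n):
    """Solve -Laplace(u) = f on the unit square with zero Dirichlet
    boundaries, via a 5-point finite-difference stencil on an n x n
    interior grid. Larger n means a denser mesh, i.e., higher fidelity."""
    h = 1.0 / (n + 1)
    xs = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    # 1D second-difference matrix; the 2D Laplacian is its Kronecker sum.
    T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A = (np.kron(np.eye(n), T) + np.kron(T, np.eye(n))) / h**2
    u = np.linalg.solve(A, f(X, Y).ravel())   # dense solve, fine for small n
    return u.reshape(n, n)

# Hypothetical fidelity ladder: one solver, four mesh resolutions.
rng = np.random.default_rng(0)
mesh_sizes = [4, 8, 16, 32]                   # assumed, not from the paper

def random_source(a, b):
    return lambda X, Y: a * np.sin(np.pi * X) * np.sin(np.pi * Y) + b

fields_by_fidelity = {
    n: [solve_poisson_2d(random_source(*rng.uniform(-1, 1, 2)), n)
        for _ in range(10)]
    for n in mesh_sizes
}
```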
Dataset Splits: No. "The number of training examples for each fidelity (from the lowest to highest) is 100, 50, 20, and 5, respectively. For testing, we generated 128 examples with the highest fidelity." No separate validation split or dataset is mentioned.
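For concreteness, the reported split translates into the layout sketched below. The dictionary structure, variable names, and `take_split` helper are hypothetical, and no validation set is carved out because the paper does not mention one.

```python
# Training examples per fidelity level, lowest to highest, as reported.
train_counts = {0: 100, 1: 50, 2: 20, 3: 5}
n_test = 128          # test examples, generated at the highest fidelity only

def take_split(examples_by_fidelity, highest=3):
    """Slice per-fidelity example lists into the paper's train/test layout.
    `examples_by_fidelity` maps fidelity index -> list of (input, output)
    pairs; the highest-fidelity list must be long enough to cover both its
    5 training examples and the 128 test examples."""
    train = {f: examples_by_fidelity[f][:n] for f, n in train_counts.items()}
    hi = examples_by_fidelity[highest]
    test = hi[train_counts[highest]:train_counts[highest] + n_test]
    return train, test
```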
Hardware Specification: No. The paper does not specify the hardware used for the experiments (e.g., CPU or GPU models, memory); it only mentions that the methods were implemented in PyTorch.
Software Dependencies: No. The paper mentions PyTorch, MATLAB, and the torchdiffeq library, but does not provide version numbers for these dependencies.
Experiment Setup: Yes. "For our method, each NN component (φ, β, and γ in Eq. (5), (6) and (9)) employed two hidden layers with tanh as the activation function... We ran ADAM (Kingma and Ba, 2014) to train all the models... We used ReduceLROnPlateau (Al-Kababji et al., 2022) scheduler to adjust the learning rate from [10^-3, 10^-2]. We set the maximum number of epochs to 5,000..."
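A minimal PyTorch sketch of the reported setup follows: a two-hidden-layer tanh network trained with Adam and a ReduceLROnPlateau scheduler for up to 5,000 epochs. The layer width, the dummy data, the loss function, and the scheduler's factor and patience are assumptions; only the depth, activation, optimizer, scheduler type, learning-rate range, and epoch cap come from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(100, 2)                        # dummy inputs (assumed shape)
Y = torch.sin(X.sum(dim=1, keepdim=True))     # dummy regression targets

# Two hidden layers with tanh activations, as the paper reports for each
# NN component; the width of 40 is an assumption.
model = nn.Sequential(
    nn.Linear(2, 40), nn.Tanh(),
    nn.Linear(40, 40), nn.Tanh(),
    nn.Linear(40, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # within the reported [1e-3, 1e-2]
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=100)      # factor/patience assumed
loss_fn = nn.MSELoss()

for epoch in range(5_000):                    # reported epoch cap
    optimizer.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())               # lowers the LR when loss plateaus
```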