GAR: Generalized Autoregression for Multi-Fidelity Fusion

Authors: Yuxin Wang, Zheng Xing, Wei Xing

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical assessment includes many canonical PDEs and real scientific examples and demonstrates that the proposed method consistently outperforms the SOTA methods by a large margin (up to 6x improvement in RMSE) with only a couple of high-fidelity training samples.
Researcher Affiliation | Collaboration | Yuxin Wang, School of Mathematical Science, Beihang University, Beijing, China, 100191 (WYXtt_2011@163.com); Zheng Xing, Graphics & Computing Department, Rockchip Electronics Co., Ltd, Fuzhou, China, 350003 (zheng.xing@rock-chips.com); Wei W. Xing, School of Mathematics and Statistics, University of Sheffield, Sheffield S10 2TN, UK, and School of Integrated Circuit Science and Engineering, Beihang University, Beijing, China, 100191 (wayne.xingle@gmail.com).
Pseudocode | No | The paper describes its algorithms and derivations but does not present them in a formal pseudocode block or algorithm environment.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Please see supplementary materials.
Open Datasets | Yes | We test on Burger's, Poisson's, and the heat equations commonly used in the literature [12, 51-53]. ... Did you use existing assets? If yes, did you cite the creators? [Yes] Please see the experimental section.
Dataset Splits | Yes | We uniformly generate 128 samples for testing and 32 for training. We increase the number of high-fidelity training samples up to the number of low-fidelity training samples (32). The comparisons are conducted five times with shuffled samples.
Hardware Specification | Yes | All experiments are run on a workstation with an AMD 5950x CPU and 32 GB RAM.
Software Dependencies | No | GAR, CIGAR, AR, NAR, and ResGP are implemented using PyTorch. While PyTorch is mentioned, a specific version number is not provided, which is necessary for reproducibility.
Experiment Setup | Yes | We uniformly generate 128 samples for testing and 32 for training. We increase the number of high-fidelity training samples up to the number of low-fidelity training samples (32). The comparisons are conducted five times with shuffled samples. ... Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Please see the Appendix. (A minimal sketch of this evaluation protocol follows the table.)
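
The rows above quote the evaluation protocol (128 uniformly generated test samples, 32 training samples, five repeats with shuffled samples, RMSE as the metric), but, as the Pseudocode row notes, the paper itself gives no formal pseudocode. The sketch below is a minimal, self-contained illustration of that protocol wrapped around the classical first-order autoregressive fusion y_H(x) ≈ ρ·y_L(x) + δ(x) that the AR baseline implements and GAR generalizes. The toy fidelity functions, the polynomial ridge surrogates, and the scalar ρ fit are assumptions made purely for illustration; they are not the authors' GP-based implementation.

```python
# Illustrative two-fidelity autoregressive fusion under the quoted protocol
# (32 training / 128 test samples, five shuffled repeats, RMSE metric).
# The synthetic fidelity functions, ridge surrogates, and scalar rho fit are
# assumptions for illustration only, NOT the authors' GAR implementation.
import numpy as np

rng = np.random.default_rng(0)

def low_fidelity(x):
    # cheap, biased approximation of the response (toy assumption)
    return np.sin(8.0 * x) + 0.3 * x

def high_fidelity(x):
    # expensive "ground truth" response (toy assumption)
    return np.sin(8.0 * x) + 0.3 * x**2 + 0.1 * np.cos(20.0 * x)

def fit_ridge(x, y, degree=6, lam=1e-3):
    # polynomial ridge regression as a dependency-free stand-in for a GP
    X = np.vander(x, degree + 1)
    w = np.linalg.solve(X.T @ X + lam * np.eye(degree + 1), X.T @ y)
    return lambda q: np.vander(q, degree + 1) @ w

def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

scores = []
for repeat in range(5):                      # five repeats with shuffled samples
    x_all = rng.uniform(0.0, 1.0, size=160)  # 32 train + 128 test, uniformly generated
    rng.shuffle(x_all)
    x_train, x_test = x_all[:32], x_all[32:]

    # Step 1: surrogate for the low-fidelity response.
    f_low = fit_ridge(x_train, low_fidelity(x_train))

    # Step 2: first-order autoregression y_H(x) ≈ rho * y_L(x) + delta(x).
    y_low_tr, y_high_tr = low_fidelity(x_train), high_fidelity(x_train)
    rho = float(np.dot(y_low_tr, y_high_tr) / np.dot(y_low_tr, y_low_tr))
    f_delta = fit_ridge(x_train, y_high_tr - rho * y_low_tr)

    pred = rho * f_low(x_test) + f_delta(x_test)
    scores.append(rmse(pred, high_fidelity(x_test)))

print(f"mean RMSE over 5 shuffled repeats: {np.mean(scores):.4f}")
```

In the actual baselines, f_low and the discrepancy f_delta would be Gaussian processes implemented in PyTorch (with GAR extending the discrepancy to high-dimensional, tensor-structured outputs); the ridge surrogates here only keep the sketch runnable without extra dependencies.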