Batch Multi-Fidelity Active Learning with Budget Constraints

Authors: Shibo Li, Jeff M Phillips, Xin Yu, Robert Kirby, Shandian Zhe

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For evaluation, we examined BMFAL-BC in five real-world applications, including three benchmark tasks in physical simulation (solving Poisson's, Heat, and viscous Burgers' equations), a topology structure design problem, and a computational fluid dynamics (CFD) task to predict the velocity field of boundary-driven flows. We compared with the budget-aware version of DMFAL, single multi-fidelity querying with our acquisition function, and several random querying strategies. Under the same budget constraint, our method consistently outperforms the competing methods throughout the learning process, often by a large margin.
Researcher Affiliation | Academia | Shibo Li, Jeff M. Phillips, Xin Yu, Robert M. Kirby, and Shandian Zhe, School of Computing, University of Utah, Salt Lake City, UT 84112, {shibo, jeffp, xiny, kirby, zhe}@cs.utah.edu
Pseudocode | Yes | Algorithm 1 Weighted-Greedy({λm}, budget B); a hedged sketch of this cost-weighted greedy selection appears after the table.
Open Source Code | No | The paper does not provide a direct link to source code or an explicit statement in the main body that the code for their methodology is released. While the checklist indicates …
Open Datasets | No | The paper describes generating its own training and test data using numerical solvers and meshes, rather than using publicly available datasets with specific access information. For example, …
Dataset Splits | No | The paper describes how initial training data was collected and how evaluation was performed, but it does not specify explicit training/validation/test dataset splits (e.g., percentages or sample counts for each split) or refer to predefined splits from standard benchmarks.
Hardware Specification | No | The paper mentions that …
Software Dependencies | No | The paper mentions …
Experiment Setup | Yes | All the methods were implemented by PyTorch (Paszke et al., 2019). We followed the same setting as in Li et al. (2022) to train the deep multi-fidelity model (see Sec. 2.2), which employed a two-layer NN at each fidelity, tanh activation, and the layer width was selected from {20, 40, 60, 80, 100} from the initial training data. The dimension of the latent output was 20. The learning rate was tuned from {1e-4, 5e-4, 1e-3, 5e-3, 1e-2}. We set the budget for acquiring each batch to 20 (normalized seconds), and ran each method to acquire 25 batches of training examples. (A PyTorch sketch of the per-fidelity network configuration also follows the table.)
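The weighted-greedy acquisition referenced in the Pseudocode row builds a batch of (input, fidelity) queries one at a time, trading off acquisition gain against the per-fidelity cost λm until the batch budget B is spent. Below is a minimal sketch of that cost-weighted greedy loop, assuming the acquisition function, candidate pool, and fidelity costs are supplied by the caller; the function name, arguments, and the `acq` callable are hypothetical and not the authors' implementation.

```python
# Hedged sketch of a cost-weighted greedy batch selection under a budget.
# `acq` (e.g., an information-gain measure conditioned on the queries already
# in the batch) and the per-fidelity costs `lam` are assumed to be given.

def weighted_greedy(candidates, fidelities, lam, acq, budget):
    """Greedily build a batch of (x, m) queries whose total cost stays <= budget.

    candidates : list of candidate inputs x
    fidelities : list of fidelity indices m
    lam        : dict mapping fidelity m -> cost lambda_m (same units as budget)
    acq        : callable(batch, x, m) -> benefit of adding (x, m) given `batch`
    budget     : total cost allowed for this batch (e.g., normalized seconds)
    """
    batch, spent = [], 0.0
    while True:
        best, best_score = None, float("-inf")
        for x in candidates:
            for m in fidelities:
                if spent + lam[m] > budget:
                    continue  # this query would exceed the remaining budget
                # Benefit per unit cost: acquisition gain divided by lambda_m.
                score = acq(batch, x, m) / lam[m]
                if score > best_score:
                    best, best_score = (x, m), score
        if best is None:  # nothing affordable remains
            break
        batch.append(best)
        spent += lam[best[1]]
    return batch
```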
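For the Experiment Setup row, the quoted configuration (a two-layer network per fidelity with tanh activations and a 20-dimensional latent output) can be written in a few lines of PyTorch. This is only an illustrative sketch under those stated hyperparameters; the class name, input dimension, number of fidelities, and the choice of Adam are assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

# Illustrative per-fidelity network matching the quoted setting: two hidden
# layers, tanh activations, 20-dimensional latent output. Layer width and
# learning rate would be selected from the reported grids
# ({20, 40, 60, 80, 100} and {1e-4, 5e-4, 1e-3, 5e-3, 1e-2}).
class FidelityNet(nn.Module):
    def __init__(self, in_dim, width=40, latent_dim=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

# One network per fidelity level (count and input dimension are placeholders),
# trained jointly with an optimizer such as Adam.
nets = nn.ModuleList(FidelityNet(in_dim=5, width=40) for _ in range(3))
optimizer = torch.optim.Adam(nets.parameters(), lr=1e-3)
```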