Bayesian Optimization with Cost-varying Variable Subsets

Authors: Sebastian Tay, Chuan Sheng Foo, Daisuke Urano, Richalynn Leong, Bryan Kian Hsiang Low

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically show that our proposed algorithm can find significantly better solutions than comparable baselines with the same budget. We empirically evaluate the performance of our proposed algorithm against the baselines under several experimental settings with synthetic and real-world datasets (Sec. 5), including a plant growth dataset and an airfoil self-noise dataset corresponding, respectively, to the precision agriculture and advanced manufacturing use cases motivated earlier in this section.
Researcher Affiliation | Collaboration | Sebastian Shenghong Tay (1, 2), Chuan Sheng Foo (2, 3), Daisuke Urano (4), Richalynn Chiu Xian Leong (4), Bryan Kian Hsiang Low (1); (1) Department of Computer Science, National University of Singapore; (2) Institute for Infocomm Research (I2R), A*STAR, Singapore; (3) Centre for Frontier AI Research (CFAR), A*STAR, Singapore; (4) Temasek Life Sciences Laboratory, Singapore
Pseudocode | Yes | Algorithm 1: UCB-CVS
Open Source Code | Yes | The code for the experiments may be found at https://github.com/sebtsh/bocvs.
Open Datasets | Yes | We use the airfoil self-noise dataset from the UCI Machine Learning Repository [9].
Dataset Splits | No | No specific training/validation/test dataset splits (e.g., percentages or counts) are provided. The paper mentions using
Hardware Specification | Yes | The following CPU times in seconds were collected on a server running Ubuntu 20.04.4 LTS with 2 Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz and 256 GB of RAM.
Software Dependencies | No | The paper mentions software such as NumPy, SciPy, PyTorch, GPyTorch, and BoTorch, but does not specify their version numbers.
Experiment Setup | Yes | For the experiments, we set βt = 2 for all t. For TS-PSQ, following [13], we use random Fourier features (RFF) [28] to approximately sample from a GP posterior; we use RFF with 1024 features. For EI-PSQ, we adapt the BoTorch Noisy Expected Improvement acquisition to the BOPSQ problem setting. To evaluate the acquisition score of a partial query, we first sample 32 fantasy models of f from the GP posterior. For each fantasy model, we compute the expected value of the partial query and take the best value as the value of the best observation so far (assuming the full query control set is available). We then compute the improvement score as the expected value minus the best value, and average the improvement score over all fantasy models.
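The EI-PSQ scoring procedure quoted above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "fantasy models" here are hypothetical stand-ins (random linear functions) rather than true GP posterior samples drawn via BoTorch, and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_fantasy_models(n_models=32, dim=2):
    # Hypothetical stand-ins for GP posterior samples: random linear
    # functions. The paper instead samples 32 fantasy models of f
    # from the GP posterior.
    weights = rng.normal(size=(n_models, dim))
    return [lambda x, w=w: x @ w for w in weights]

def ei_psq_score(fantasies, partial_query_samples, full_query_grid):
    # For each fantasy model: the expected value of the partial query
    # (a Monte Carlo average over completions of the unspecified
    # variables) minus the best value attainable on the full query
    # grid; the final score averages this improvement over fantasies.
    scores = []
    for f in fantasies:
        expected_val = f(partial_query_samples).mean()
        best_val = f(full_query_grid).max()
        scores.append(expected_val - best_val)
    return float(np.mean(scores))
```

When the candidate completions are contained in the full query grid, each per-fantasy improvement is non-positive, so the acquisition ranks partial queries by how close their expected value comes to the best full-query value.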