A Unified Framework for Bayesian Optimization under Contextual Uncertainty
Authors: Sebastian Shenghong Tay, Chuan-Sheng Foo, Daisuke Urano, Richalynn Leong, Bryan Kian Hsiang Low
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We develop a general Thompson sampling algorithm that is able to optimize any objective within the BOCU framework, analyze its theoretical properties, and compare it to suitable baselines across different experimental settings and uncertainty objectives." See also Section 5 (Experiments). |
| Researcher Affiliation | Academia | 1Department of Computer Science, National University of Singapore 2Institute for Infocomm Research (I2R), A*STAR, Singapore 3Centre for Frontier AI Research (CFAR), A*STAR, Singapore 4Temasek Life Sciences Laboratory, Singapore |
| Pseudocode | Yes | Algorithm 1 TS-BOCU |
| Open Source Code | Yes | The source code for the experiments (along with all datasets) is provided in the supplementary material (available online at https://github.com/sebtsh/unified-framework-BOCU) for full reproducibility of the experimental results. |
| Open Datasets | Yes | a plant growth simulator constructed from real-world data where the decision and context variables are the pH and concentration of NH3 of the nutrient medium respectively, and the output is the maximum leaf area of a plant (Tay et al., 2022); and 4) a COVID-19 epidemic model from Frazier et al. (2022) |
| Dataset Splits | No | The paper describes experimental settings for Bayesian Optimization, which involves sequential data acquisition from functions or simulators. It does not provide traditional train/validation/test dataset splits as would be typical for a supervised learning setup with a fixed dataset. |
| Hardware Specification | No | No hardware (CPU/GPU) details are reported; the paper only lists software: "The experiments were implemented in Python using NumPy (Harris et al., 2020), PyTorch (Paszke et al., 2019), GPyTorch (Gardner et al., 2018) and BoTorch (Balandat et al., 2020)." |
| Software Dependencies | Yes | The experiments were implemented in Python using NumPy (Harris et al., 2020), PyTorch (Paszke et al., 2019), GPyTorch (Gardner et al., 2018) and BoTorch (Balandat et al., 2020). |
| Experiment Setup | Yes | We set the number of decisions |X| = 1024, and the number of contexts |C| = n = 64. The reference distribution p_t at all iterations is a Gaussian with mean 0.5·1_ℓ and covariance 0.2·I_ℓ... The true distribution p*_t at all iterations is a uniform distribution over [0, 1]^ℓ... The margin at all iterations is d(p_t, p*_t). During the learning procedure, we use a GP with mean 0 and an ARD squared exponential kernel with k((x, c), (x, c)) = 1 and lengthscale 0.1 for each dimension. We set the observational noise standard deviation σ = 0.01, and the number of initial observations at the start of each learning procedure to be 5. For TS-BOCU, we approximate sampling from the posterior using random Fourier features (Rahimi & Recht, 2007) with 1024 features. For all UCB algorithms, we set β_t = 2 for all iterations. |
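
The setup row above mentions approximating Thompson-sampling draws from the GP posterior with random Fourier features (Rahimi & Recht, 2007). The sketch below illustrates that general technique with plain NumPy; it is a minimal illustration under assumed choices (squared exponential kernel with unit variance, lengthscale 0.1, noise σ = 0.01, matching the quoted hyperparameters), not the authors' TS-BOCU implementation, and the function names are hypothetical.

```python
import numpy as np

def rff_features(X, n_features=1024, lengthscale=0.1, seed=0):
    """Random Fourier features approximating a unit-variance squared
    exponential kernel (Rahimi & Recht, 2007): k(a, b) ~ phi(a)^T phi(b)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # For the SE kernel, spectral frequencies are Gaussian with
    # standard deviation 1/lengthscale per input dimension.
    omega = rng.normal(0.0, 1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ omega + b)

def sample_posterior_function(X_obs, y_obs, noise_std=0.01,
                              n_features=1024, seed=0):
    """Draw one approximate GP posterior sample as an explicit function
    f(x) = phi(x)^T w, via Bayesian linear regression on the features.
    Returning a closed-form function is what makes Thompson sampling
    cheap: the sample can be maximized over all candidate inputs."""
    Phi = rff_features(X_obs, n_features=n_features, seed=seed)
    A = Phi.T @ Phi + noise_std**2 * np.eye(n_features)
    mean_w = np.linalg.solve(A, Phi.T @ y_obs)
    cov_w = noise_std**2 * np.linalg.inv(A)
    rng = np.random.default_rng(seed + 1)
    w = rng.multivariate_normal(mean_w, cov_w, method="cholesky")
    # Reuse the same feature seed so train and test features match.
    return lambda X: rff_features(X, n_features=n_features, seed=seed) @ w
```

A Thompson-sampling step would then evaluate the returned function on the 1024 candidate decisions (paired with contexts) and pick the maximizer.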